Sample records for randomized parallel design

  1. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1—Design

    PubMed Central

    Li, Fan; Gallis, John A.; Prague, Melanie; Murray, David M.

    2017-01-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis. PMID:28426295

  2. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1-Design.

    PubMed

    Turner, Elizabeth L; Li, Fan; Gallis, John A; Prague, Melanie; Murray, David M

    2017-06-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis.

  3. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, all subjects eventually receive the experimental therapy, and exposure to placebo is limited to a short, fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios in which the treatment response times followed the exponential, Weibull, or lognormal distribution. The median response time was assumed to be 355 days for placebo and 42 days for the experimental drug. Based on the simulation results, the sample size required to achieve the same level of power differed across time-to-response distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel groups design to achieve similar power.
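
The simulation logic described above (the authors' program was written in R) can be sketched in Python for the parallel-group arm under the exponential scenario. The medians come from the abstract; the 180-day response window, the responder definition, and the two-proportion z-test are illustrative assumptions, not details from the paper:

```python
import numpy as np

def simulate_parallel_power(n_per_arm, median_placebo=355, median_drug=42,
                            window=180, n_sims=2000, z_crit=1.96, seed=0):
    """Monte Carlo power of a two-arm parallel trial in which 'response'
    means responding within a fixed observation window, with exponential
    time-to-response (medians as stated in the abstract; the window and
    the two-proportion z-test are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    scale_p = median_placebo / np.log(2)   # exponential scale from median
    scale_d = median_drug / np.log(2)
    rejections = 0
    for _ in range(n_sims):
        resp_p = (rng.exponential(scale_p, n_per_arm) <= window).sum()
        resp_d = (rng.exponential(scale_d, n_per_arm) <= window).sum()
        p1, p2 = resp_d / n_per_arm, resp_p / n_per_arm
        pooled = (resp_d + resp_p) / (2 * n_per_arm)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sims
```

Running the same skeleton with Weibull or lognormal draws (numpy provides both) reproduces the kind of scenario comparison the abstract describes.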

  4. Can sequential parallel comparison design and two-way enriched design be useful in medical device clinical trials?

    PubMed

    Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin

    2016-01-01

    Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulation of placebo nonresponders are then combined to yield a single p-value for the treatment comparison. The two-way enriched design (TED) is an extension of SPCD in which active treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
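
The SPCD pooling step described above is commonly implemented as a weighted combination of the two stage-wise z-statistics. A minimal sketch, assuming independent standard-normal stage statistics under the null and an illustrative weight w (neither is taken from this abstract):

```python
import math

def spcd_z(z1, z2, w=0.6):
    """Combine the stage-1 (all comers) and stage-2 (placebo
    non-responders) z-statistics into a single SPCD test statistic.
    Assuming the two stage statistics are independent N(0,1) under the
    null, the weighted sum is rescaled to unit variance; w = 0.6 is an
    illustrative design weight, not a value from the abstract."""
    return (w * z1 + (1 - w) * z2) / math.sqrt(w ** 2 + (1 - w) ** 2)
```

With w = 1 the statistic reduces to a standard parallel comparison on stage 1 alone; smaller w shifts weight toward the placebo non-responder stage.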

  5. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and placebo non-responders are then re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters of SPCD (the allocation proportion to placebo in stage 1 and the weight of stage 1 data in the overall efficacy test statistic) during an interim analysis.

  6. Design of a switch matrix gate/bulk driver controller for thin film lithium microbatteries using microwave SOI technology

    NASA Technical Reports Server (NTRS)

    Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.

    2003-01-01

    This paper presents a design approach that makes it possible to attain any random grouping pattern among the microbatteries. In this case, the result is the ability to charge microbatteries in parallel and to discharge microbatteries in parallel, or pairs of microbatteries in series.

  7. Parallel Optical Random Access Memory (PORAM)

    NASA Technical Reports Server (NTRS)

    Alphonse, G. A.

    1989-01-01

    It is shown that the need to minimize component count, power, and size, and to maximize packing density, requires a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in order of research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors, and a magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The second design uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to further reduce size and minimize the number of components.

  8. An in silico approach helped to identify the best experimental design, population, and outcome for future randomized clinical trials.

    PubMed

    Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice

    2016-01-01

    The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine.

  9. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    PubMed

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup through parallelization. In this paper, we present distributed-memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
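
The edge-switch operation itself is simple to state in code. A sequential Python sketch on an undirected simple graph (the paper's contribution is the distributed-memory parallelization, which this sketch does not attempt):

```python
import random

def switch_edge(edges, rng, max_tries=100):
    """One edge switch on an undirected simple graph stored as a set of
    frozenset edges: pick edges {a,b} and {c,d} at random and rewire to
    {a,d} and {c,b}, rejecting any switch that would create a self-loop
    or a parallel edge.  Every vertex degree is preserved."""
    edge_list = list(edges)
    for _ in range(max_tries):
        e1, e2 = rng.sample(edge_list, 2)
        a, b = tuple(e1)
        c, d = tuple(e2)
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if len(new1) < 2 or len(new2) < 2:                  # self-loop
            continue
        if new1 == new2 or new1 in edges or new2 in edges:  # parallel edge
            continue
        edges.remove(e1); edges.remove(e2)
        edges.add(new1); edges.add(new2)
        return True
    return False
```

Because the rewiring only exchanges endpoints, repeated switches generate random graphs with the original degree sequence, which is the application named in the abstract.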

  10. Parallel Algorithms for Switching Edges in Heterogeneous Graphs☆

    PubMed Central

    Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-01-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup through parallelization. In this paper, we present distributed-memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680

  11. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative when the CRT is infeasible. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
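
The "required cluster size for a given number of clusters" idea can be illustrated for the parallel CRT using the standard design effect DE = 1 + (m − 1)·ICC. A Python sketch (the corresponding SW-CRT formula is more involved and is not reproduced here):

```python
import math

def cluster_size_for_fixed_clusters(n_individual, k_clusters, icc):
    """Cluster size m for a parallel CRT with k clusters in total so
    that k*m subjects match the power of n_individual under individual
    randomization, using DE = 1 + (m - 1)*icc.  Solving
    k*m = n_individual*(1 + (m - 1)*icc) for m gives
    m = n_individual*(1 - icc) / (k - n_individual*icc); the design is
    infeasible when the denominator is non-positive."""
    denom = k_clusters - n_individual * icc
    if denom <= 0:
        raise ValueError("infeasible: add clusters or reduce the ICC")
    return math.ceil(n_individual * (1 - icc) / denom)
```

For example, 120 subjects under individual randomization and 12 clusters with ICC = 0.05 give m = 19, i.e. 228 subjects in total (120 × DE with DE = 1.9).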

  12. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has lacked sufficient consideration of the true virtues of the delayed-start design and its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. We evaluated whether the delayed-start design offers real advantages, particularly in overall efficacy and sample size requirements, as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios for the development of the treatment effect over time, we discuss the advantages, limitations, and potential costs of the delayed-start design. We clarify that the delayed-start design is not suitable for drugs with an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased estimated treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel group RCT in the field of rare disease, without discussing the specific methodological needs of this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, consequently, in larger sample size requirements than expected under a standard parallel-group design. This also affects benefit-risk assessment.
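
The sample size inflation at issue follows from the usual two-arm formula, in which n scales as 1/δ²: if the reduced placebo time shrinks the estimable effect δ by a factor f, the required sample size grows by 1/f². A sketch with illustrative numbers (α = 0.05 two-sided, 80% power; none of these values are from the paper):

```python
import math

Z_ALPHA = 1.959964  # standard normal quantile for two-sided alpha = 0.05
Z_BETA = 0.841621   # standard normal quantile for 80% power

def n_per_arm(delta, sigma=1.0):
    """Per-arm sample size for a two-arm comparison of normal means:
    n = 2*(z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2.
    Shrinking the estimable effect delta by a factor f multiplies the
    required n by 1/f^2, which is the paper's core point about reduced
    time on placebo.  All numeric inputs here are illustrative."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / delta ** 2)
```

If the delayed-start schedule attenuates a standardized effect of 0.5 to 0.4 (a 20% reduction), the per-arm requirement rises from 63 to 99, roughly the 1/0.8² ≈ 1.56-fold inflation the formula predicts.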

  13. Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.

    PubMed

    Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M

    2017-07-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.

  14. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, thereby providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials and, as a comparison, for parallel cluster randomized trials. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  15. Dose finding with the sequential parallel comparison design.

    PubMed

    Wang, Jessie J; Ivanova, Anastasia

    2014-01-01

    The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.

  16. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
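
The exchange (parallel tempering) step at the heart of the method swaps replicas at adjacent temperatures with the standard Metropolis acceptance probability min(1, exp((βᵢ − βⱼ)(Eᵢ − Eⱼ))). A single-process Python sketch of one exchange sweep (the paper's multi-GPU scheduling and adaptive temperature insertion are not reproduced):

```python
import math
import random

def exchange_step(energies, betas, rng):
    """One sweep of replica-exchange (parallel tempering) over adjacent
    temperature pairs: replicas i and i+1 are swapped with probability
    min(1, exp((beta_i - beta_{i+1}) * (E_i - E_{i+1}))).  The energies
    are assumed to come from the spin-level simulation; only the
    exchange bookkeeping is sketched here."""
    accepted = []
    for i in range(len(betas) - 1):
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0 or rng.random() < math.exp(delta):
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
            accepted.append(True)
        else:
            accepted.append(False)
    return accepted
```

Temperature gaps where this acceptance rate collapses are exactly where the paper's adaptive scheme inserts mid-point temperatures.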

  17. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    ERIC Educational Resources Information Center

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  18. Group Cognitive Behavioural Therapy and Group Recreational Activity for Adults with Autism Spectrum Disorders: A Preliminary Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne

    2014-01-01

    Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This "preliminary" randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive…

  19. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    PubMed

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement, and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully; the parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold were disordered in orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells were arranged in parallel and more spindly cells were observed; by contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly shape and parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random arrangement. The results indicate that the parallel collagen scaffold is an ideal structure for promoting tendon repair.

  20. Comparison of intervention effects in split-mouth and parallel-arm randomized controlled trials: a meta-epidemiological study

    PubMed Central

    2014-01-01

    Background Split-mouth randomized controlled trials (RCTs) are popular in oral health research. Meta-analyses frequently include trials of both split-mouth and parallel-arm designs to derive combined intervention effects. However, carry-over effects may induce bias in split-mouth RCTs. We aimed to assess whether intervention effect estimates differ between split-mouth and parallel-arm RCTs investigating the same questions. Methods We performed a meta-epidemiological study. We systematically reviewed meta-analyses including both split-mouth and parallel-arm RCTs with binary or continuous outcomes published up to February 2013. Two independent authors selected studies and extracted data. We used a two-step approach to quantify the differences between split-mouth and parallel-arm RCTs: first, for each meta-analysis, we derived ratios of odds ratios (RORs) for dichotomous data and differences in standardized mean differences (∆SMDs) for continuous data; second, we pooled RORs or ∆SMDs across meta-analyses using random-effects meta-analysis models. Results We selected 18 systematic reviews, yielding 15 meta-analyses with binary outcomes (28 split-mouth and 28 parallel-arm RCTs) and 19 meta-analyses with continuous outcomes (28 split-mouth and 28 parallel-arm RCTs). Effect estimates did not differ between split-mouth and parallel-arm RCTs (mean ROR, 0.96, 95% confidence interval 0.52–1.80; mean ∆SMD, 0.08, 95% confidence interval −0.14 to 0.30). Conclusions Our study did not provide sufficient evidence for a difference in intervention effect estimates derived from split-mouth and parallel-arm RCTs. Authors should consider including split-mouth RCTs in their meta-analyses with suitable and appropriate analysis. PMID:24886043
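
The first step of the two-step approach, a ratio of odds ratios with its confidence interval, is straightforward to compute on the log scale, where the split-mouth and parallel-arm estimates are treated as independent. A sketch (the numeric inputs in the usage below are illustrative, not data from the study):

```python
import math

def ror_with_ci(or_split, var_log_or_split, or_parallel, var_log_or_parallel,
                z=1.96):
    """Ratio of odds ratios (split-mouth vs parallel-arm) with a 95%
    confidence interval, computed on the log scale where the two pooled
    estimates are treated as independent, so their variances add.
    ROR = 1 means the two designs agree."""
    log_ror = math.log(or_split) - math.log(or_parallel)
    se = math.sqrt(var_log_or_split + var_log_or_parallel)
    return (math.exp(log_ror),
            math.exp(log_ror - z * se),
            math.exp(log_ror + z * se))
```

The second step, pooling the log-RORs across meta-analyses under a random-effects model, adds a between-meta-analysis variance component on the same log scale.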

  1. Supervised Home Training of Dialogue Skills in Chronic Aphasia: A Randomized Parallel Group Study

    ERIC Educational Resources Information Center

    Nobis-Bosch, Ruth; Springer, Luise; Radermacher, Irmgard; Huber, Walter

    2011-01-01

    Purpose: The aim of this study was to prove the efficacy of supervised self-training for individuals with aphasia. Linguistic and communicative performance in structured dialogues represented the main study parameters. Method: In a cross-over design for randomized matched pairs, 18 individuals with chronic aphasia were examined during 12 weeks of…

  2. A Randomized Controlled Trial of Trauma-Focused Cognitive Behavioral Therapy for Sexually Exploited, War-Affected Congolese Girls

    ERIC Educational Resources Information Center

    O'Callaghan, Paul; McMullen, John; Shannon, Ciaran; Rafferty, Harry; Black, Alastair

    2013-01-01

    Objective: To assess the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) delivered by nonclinical facilitators in reducing posttraumatic stress, depression, and anxiety and conduct problems and increasing prosocial behavior in a group of war-affected, sexually exploited girls in a single-blind, parallel-design, randomized,…

  3. A two-way enriched clinical trial design: combining advantages of placebo lead-in and randomized withdrawal.

    PubMed

    Ivanova, Anastasia; Tamura, Roy N

    2015-12-01

    A new clinical trial design, designated the two-way enriched design (TED), is introduced, which augments the standard randomized placebo-controlled trial with second-stage enrichment in placebo non-responders and drug responders. The trial is run in two stages. In the first stage, patients are randomized between drug and placebo. In the second stage, both placebo non-responders and drug responders are re-randomized between drug and placebo. All first-stage data, and the second-stage data from first-stage placebo non-responders and first-stage drug responders, are used in the efficacy analysis. The authors develop one-, two-, and three-degree-of-freedom score tests for treatment effect in the TED and give formulae for asymptotic power and sample size computations. The authors compute the optimal first-stage allocation ratio between drug and placebo for the TED and compare the operating characteristics of the design with the standard parallel clinical trial, placebo lead-in, and randomized withdrawal designs. Two motivating examples from different disease areas illustrate the possible design considerations.

  4. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such designs differ from conventional parallel-design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in study design among the 3 RCTs and present 3 alternative methods for converting outcome data from one form of adaptive design to a format consistent with conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert the outcome data facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when an NMA includes treatments evaluated with an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to the conventional parallel format to facilitate transparent and less-biased comparisons.

  5. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under multiple constraints is difficult in the multirate switching model, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller or a single-rate parallel distributed compensation under the same conditions.

  6. VISUAL AND AUDIO PRESENTATION IN MACHINE PROGRAMED INSTRUCTION. FINAL REPORT.

    ERIC Educational Resources Information Center

    ALLEN, WILLIAM H.

    This study was part of a larger research program aimed toward development of paradigms of message design. Objectives of three parallel experiments were to evaluate interactions of presentation mode, program type, and content as they affect learner characteristics. Each experiment used 18 treatments in a factorial design with randomly selected…

  7. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
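    For the simplest case described above (a two-arm, parallel group, completely randomized design with equal cluster sizes), the design-effect inflation can be sketched as follows. The function names and the example ICC are illustrative, not from the paper.

    ```python
    import math

    def design_effect(m, icc):
        """Standard design effect for equal cluster sizes: 1 + (m - 1) * ICC."""
        return 1.0 + (m - 1) * icc

    def clustered_sample_size(n_individual, m, icc):
        """Inflate an individually randomized sample size for cluster randomization,
        rounding up to whole clusters per arm."""
        n_total = n_individual * design_effect(m, icc)
        clusters = math.ceil(n_total / m)
        return clusters * m, clusters

    # Example: 200 participants needed under individual randomization,
    # clusters of 20 individuals, ICC = 0.05 -> design effect 1.95.
    n, k = clustered_sample_size(200, 20, 0.05)
    ```

    With these illustrative inputs the calculation yields 20 clusters (400 participants), showing how even a modest ICC nearly doubles the required sample size.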

  8. Preference option randomized design (PORD) for comparative effectiveness research: Statistical power for testing comparative effect, preference effect, selection effect, intent-to-treat effect, and overall effect.

    PubMed

    Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith

    2017-01-01

    Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold standard design, but by design it requires participants to comply with a randomly assigned intervention regardless of their preference. Therefore, the randomized clinical trial may impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the preference option randomized design consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to have no alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) to collectively guide decision making between interventions.
Statistical power functions for testing all these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.

  9. A novel ternary content addressable memory design based on resistive random access memory with high intensity and low search energy

    NASA Astrophysics Data System (ADS)

    Han, Runze; Shen, Wensheng; Huang, Peng; Zhou, Zheng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    A novel ternary content addressable memory (TCAM) design based on resistive random access memory (RRAM) is presented. Each TCAM cell consists of two parallel RRAM devices to both store and search for ternary data. The cell size of the proposed design is 8F2, enabling a ∼60× cell area reduction compared with the conventional static random access memory (SRAM) based implementation. Simulation results also show that the search delay and energy consumption of the proposed design for a 64-bit word search are 2 ps and 0.18 fJ/bit/search, respectively, at the 22 nm technology node, where significant improvements are achieved compared to previous works. The desired characteristics of RRAM for implementation of the high performance TCAM search chip are also discussed.
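    The store-and-search behaviour that a TCAM cell implements in hardware can be illustrated functionally. This toy model (the names and 4-bit example words are invented here) mimics only the ternary matching logic, not the RRAM circuit itself.

    ```python
    def tcam_match(stored, key):
        """Ternary match: each stored cell is '0', '1', or 'X' (don't care)."""
        return all(s == 'X' or s == q for s, q in zip(stored, key))

    def tcam_search(table, key):
        """Return the indices of all entries matching the search key.
        In hardware every row is compared in parallel in a single cycle."""
        return [i for i, word in enumerate(table) if tcam_match(word, key)]

    table = ["10X1", "1X00", "XXXX"]
    hits = tcam_search(table, "1001")  # rows 0 and 2 match
    ```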

  10. Impact of a Community-Based Intervention on Serving and Intake of Vegetables among Low-Income, Rural Appalachian Families

    ERIC Educational Resources Information Center

    Wenrich, Tionni R.; Brown, J. Lynne; Wilson, Robin Taylor; Lengerich, Eugene J.

    2012-01-01

    Objective: To evaluate the effectiveness of a community-based intervention promoting the serving and eating of deep-orange, cruciferous, and dark-green leafy vegetables. Design: Randomized, parallel-group, community-based intervention with a baseline/postintervention/3-month follow-up design. Setting and Participants: Low-income food preparers (n…

  11. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
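    In the simplest one-sided noncompliance case, the parallel-arm IV estimator mentioned above reduces to the Wald ratio of the ITT effect to the compliance difference between arms. In the sketch below the -19% ITT risk difference is taken from the abstract, while the 85% compliance figure is invented for illustration.

    ```python
    def cace_wald(itt_effect, compliance_treat, compliance_control=0.0):
        """Instrumental-variables (Wald) estimator of CACE: the intention-to-treat
        effect scaled by the difference in intervention uptake between arms."""
        uptake_diff = compliance_treat - compliance_control
        if uptake_diff == 0:
            raise ValueError("no compliance difference; CACE is not identified")
        return itt_effect / uptake_diff

    # Illustrative: ITT risk difference of -19% with 85% uptake in the
    # intervention arm and none in control.
    cace = cace_wald(-0.19, 0.85)
    ```

    Because the scaling factor is at most 1, the CACE estimate is always at least as large in magnitude as the ITT estimate, matching the pattern reported in the abstract.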

  12. Optimization under uncertainty of parallel nonlinear energy sinks

    NASA Astrophysics Data System (ADS)

    Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe

    2017-04-01

    Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.

  13. An efficient dynamic load balancing algorithm

    NASA Astrophysics Data System (ADS)

    Lagaros, Nikos D.

    2014-01-01

    In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take into account sources of randomness and uncertainty. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of computing resources in order to deal with problems of this kind. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves nearly linear speedup, with efficiency values approaching 100% relative to the sequential procedure.
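    The reported speedup and efficiency figures follow the usual parallel-performance definitions, sketched here generically (this fragment is an illustration, not code from the study).

    ```python
    def speedup(t_serial, t_parallel):
        """Ratio of sequential to parallel wall-clock time."""
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, n_procs):
        """Fraction of ideal linear speedup achieved on n_procs processors;
        1.0 corresponds to the '100%' figure quoted in the abstract."""
        return speedup(t_serial, t_parallel) / n_procs

    # Illustrative timings: a 100 s sequential run finishing in 12.5 s on 8 cores.
    eff = efficiency(100.0, 12.5, 8)
    ```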

  14. A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration

    NASA Astrophysics Data System (ADS)

    Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves

    2011-07-01

    An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented in standard CMOS logic available in commercial libraries to provide the logic functions for the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. The characterization of the on-chip generator confirms its performance and reveals promising specifications.
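    A software sketch of the underlying idea: generate a PRBS with a linear-feedback shift register, then decimate it into parallel sub-sequences. The PRBS7 polynomial (x^7 + x^6 + 1, period 127) and the helper names are chosen for illustration and are not from the paper.

    ```python
    def prbs7(seed=0x7F, length=127):
        """Bit stream from a 7-bit maximal-length LFSR (x^7 + x^6 + 1)."""
        state = seed & 0x7F
        bits = []
        for _ in range(length):
            bits.append(state & 1)
            feedback = ((state >> 6) ^ (state >> 5)) & 1  # taps at x^7 and x^6
            state = ((state << 1) | feedback) & 0x7F
        return bits

    def decimate(bits, n):
        """Split one PRBS stream into n parallel sub-sequences (every n-th bit),
        so each lane can run at 1/n of the full bit rate."""
        return [bits[i::n] for i in range(n)]

    stream = prbs7(length=254)   # two full periods of 127 bits
    lanes = decimate(stream, 4)  # four parallel low-rate sequences
    ```

    A maximal-length 7-bit sequence contains 64 ones and 63 zeros per period, a quick sanity check on any LFSR implementation.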

  15. The Effect of a Microprocessor Prosthetic Foot on Function and Quality of Life in Transtibial Amputees Who Are Limited Community Ambulators

    DTIC Science & Technology

    2017-09-01

    parallel, randomized, controlled clinical trial designed to determine if a microprocessor controlled prosthetic foot (MPF), with greater range of motion and active power, will… CONTRACTING ORGANIZATION: University of Tennessee

  16. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  17. An Evaluation of the Effectiveness of the Remediation Plus Program on Improving Reading Achievement of Students in the Marinette (WI) School District

    ERIC Educational Resources Information Center

    Corcoran, Roisin P.; Ross, Steven M.

    2015-01-01

    The study was implemented in the Title I Marinette School District using a randomized experimental design and parallel quasi experimental design spanning three grades 1-3 in 3 district elementary schools. The Remediation Plus Intervention is a multi-sensory, systematic synthetic phonics curriculum for all ages of students who struggle with…

  18. Efficacy and safety of sacubitril/valsartan (LCZ696) in Japanese patients with chronic heart failure and reduced ejection fraction: Rationale for and design of the randomized, double-blind PARALLEL-HF study.

    PubMed

    Tsutsui, Hiroyuki; Momomura, Shinichi; Saito, Yoshihiko; Ito, Hiroshi; Yamamoto, Kazuhiro; Ohishi, Tomomi; Okino, Naoko; Guo, Weinong

    2017-09-01

    The prognosis of heart failure patients with reduced ejection fraction (HFrEF) in Japan remains poor, although there is growing evidence for increasing use of evidence-based pharmacotherapies in Japanese real-world HF registries. Sacubitril/valsartan (LCZ696) is a first-in-class angiotensin receptor neprilysin inhibitor shown to reduce mortality and morbidity in the recently completed largest outcome trial in patients with HFrEF (PARADIGM-HF trial). The prospectively designed phase III PARALLEL-HF (Prospective comparison of ARNI with ACE inhibitor to determine the noveL beneficiaL trEatment vaLue in Japanese Heart Failure patients) study aims to assess the clinical efficacy and safety of LCZ696 in Japanese HFrEF patients, and show similar improvements in clinical outcomes as the PARADIGM-HF study enabling the registration of LCZ696 in Japan. This is a multicenter, randomized, double-blind, parallel-group, active controlled study of 220 Japanese HFrEF patients. Eligibility criteria include a diagnosis of chronic HF (New York Heart Association Class II-IV) and reduced ejection fraction (left ventricular ejection fraction ≤35%) and increased plasma concentrations of natriuretic peptides [N-terminal pro B-type natriuretic peptide (NT-proBNP) ≥600 pg/mL, or NT-proBNP ≥400 pg/mL for those who had a hospitalization for HF within the last 12 months] at the screening visit. The study consists of three phases: (i) screening, (ii) single-blind active LCZ696 run-in, and (iii) double-blind randomized treatment. Patients tolerating LCZ696 50 mg bid during the treatment run-in are randomized (1:1) to receive LCZ696 100 mg bid or enalapril 5 mg bid for 4 weeks followed by up-titration to target doses of LCZ696 200 mg bid or enalapril 10 mg bid in a double-blind manner. The primary outcome is the composite of cardiovascular death or HF hospitalization and the study is an event-driven trial. 
The design of the PARALLEL-HF study is aligned with the PARADIGM-HF study and aims to assess the efficacy and safety of LCZ696 in Japanese HFrEF patients. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
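    The PSO step used to seed the BP network's weights can be sketched generically. This minimal serial PSO shows the velocity/position update at its core; the coefficients and the toy quadratic objective are illustrative stand-ins, not the paper's MapReduce implementation.

    ```python
    import random

    def pso_minimize(loss, dim, n_particles=20, iters=100, seed=0):
        """Minimal particle swarm optimizer returning the best position found.
        Here it stands in for tuning a BP network's initial weights."""
        rng = random.Random(seed)
        w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
        pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # per-particle best positions
        pbest_val = [loss(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = loss(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy objective standing in for the network's training error.
    best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
    ```

    In the paper's setting, MapReduce would distribute the loss evaluations (the expensive part) across Hadoop nodes while the update rule above stays unchanged.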

  20. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.

  1. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
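    The multispin-coding idea, packing many spins into one machine word so that a single bitwise operation updates them all, can be illustrated in a few lines. This toy sketch (names invented here) shows only the bit-packing, not the CUDA kernel design or the asynchronous variant.

    ```python
    # 64 independent Ising spins packed into one 64-bit word, one bit each.
    MASK64 = (1 << 64) - 1

    def flip_spins(word, flip_mask):
        """Flip every spin whose bit is set in flip_mask with a single XOR."""
        return (word ^ flip_mask) & MASK64

    def spin(word, i):
        """Read spin i as +1/-1 (bit 1 -> up, bit 0 -> down)."""
        return 1 if (word >> i) & 1 else -1

    word = 0b1010                      # spins 1 and 3 up
    word = flip_spins(word, 0b0110)    # flip spins 1 and 2 in one operation
    ```

    In a real spin-glass code the flip mask is built from the Metropolis acceptance test of each replica, so 64 parallel-tempering replicas advance per word per instruction.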

  2. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). 
Because of dependency issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.
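    The master-slave (processor-farm) pattern described above can be sketched as follows. The GA operators and the toy one-max fitness are illustrative stand-ins for the expensive circuit-simulation evaluations, and a thread pool plays the role of the slave nodes.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fitness(bits):
        """Toy fitness standing in for an expensive circuit simulation."""
        return sum(bits)

    def evolve(pop_size=40, genome_len=32, generations=30, seed=1):
        """Processor-farm GA: the master loop farms fitness evaluations out to
        workers, then applies selection, crossover, and mutation itself."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        with ThreadPoolExecutor(max_workers=4) as workers:
            for _ in range(generations):
                scores = list(workers.map(fitness, pop))  # slave-node step
                ranked = [p for _, p in sorted(zip(scores, pop),
                                               key=lambda t: -t[0])]
                parents = ranked[: pop_size // 2]         # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, genome_len)    # one-point crossover
                    child = a[:cut] + b[cut:]
                    if rng.random() < 0.1:                # occasional mutation
                        child[rng.randrange(genome_len)] ^= 1
                    children.append(child)
                pop = children
        return max(sum(p) for p in pop)

    best = evolve()
    ```

    With CPU-bound simulations the pool would be a cluster of processes or machines; the master's loop is unchanged, which is why the farm pattern scales so naturally for GAs.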

  3. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems.

    DTIC Science & Technology

    1985-05-01

    unit in the data base, with knowing one generic assembly language. The 5-tuple describing single operation execution time of the operations…computing machinery capable of performing these tasks within a given time constraint. Because the majority of the available computing machinery is general

  4. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987

  5. Interventions to Reduce Distress in Adult Victims of Rape and Sexual Violence: A Systematic Review

    ERIC Educational Resources Information Center

    Regehr, Cheryl; Alaggia, Ramona; Dennis, Jane; Pitts, Annabel; Saini, Michael

    2013-01-01

    Objectives: This article presents a systematic evaluation of the effectiveness of interventions aimed at reducing distress in adult victims of rape and sexual violence. Method: Studies were eligible for the review if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design. Results:…

  6. Design paper: the DEMO trial: a randomized, parallel-group, observer-blinded clinical trial of aerobic versus non-aerobic versus relaxation training for patients with light to moderate depression.

    PubMed

    Krogh, Jesper; Petersen, Lone; Timmermann, Michael; Saltin, Bengt; Nordentoft, Merete

    2007-01-01

    In western countries, the yearly incidence of depression is estimated to be 3-5% and the lifetime prevalence is 17%. In patient populations with chronic diseases the point prevalence may be 20%. Depression is associated with increased risk for various conditions such as osteoporosis, cardiovascular diseases, and dementia. WHO stated in 2000 that depression was the fourth leading cause of disease burden in terms of disability. In 2000 the cost of depression in the US was estimated at 83 billion dollars. A predominance of trials suggests that physical exercise has a positive effect on depressive symptoms. However, a meta-analysis from 2001 stated: "The effectiveness of exercise in reducing symptoms of depression cannot be determined because of a lack of good quality research on clinical populations with adequate follow-up." The major objective of this randomized trial is to compare the effect of non-aerobic, aerobic, and relaxation training on depressive symptoms using the blindly assessed Hamilton depression scale (HAM-D(17)) as primary outcome. The secondary outcome is the effect of the intervention on working status (i.e., lost days from work, employed/unemployed) and the tertiary outcomes consist of biological responses. The trial is designed as a randomized, parallel-group, observer-blinded clinical trial. Patients are recruited through general practitioners and psychiatrists and randomized to three different interventions: 1) non-aerobic training (progressive resistance training), 2) aerobic training (cardiorespiratory fitness), and 3) relaxation training with minimal impact on strength or cardiorespiratory fitness. Training for all three groups takes place twice a week for 4 months. Evaluation of patients' symptoms takes place four and 12 months after inclusion. The trial is designed to include 45 patients in each group. Statistical analysis will be done as intention to treat (all randomized patients). 
Results from the DEMO trial will be reported according to the CONSORT guidelines in 2008-2009.

  7. The efficacy and safety of Fufangdanshen tablets (Radix Salviae miltiorrhizae formula tablets) for mild to moderate vascular dementia: a study protocol for a randomized controlled trial.

    PubMed

    Tian, Jinzhou; Shi, Jing; Wei, Mingqing; Qin, Renan; Ni, Jingnian; Zhang, Xuekai; Li, Ting; Wang, Yongyan

    2016-06-08

    Vascular dementia (VaD) is the second most common subtype of dementia after Alzheimer's disease (AD). Currently, there are no medications approved for treating patients with VaD. Fufangdanshen (FFDS) tablets (Radix Salviae miltiorrhizae formula tablets) are a traditional Chinese medicine reported to improve memory. However, the existing evidence for FFDS tablets in clinical practice derives from methodologically flawed studies. To further investigate the safety, tolerability, and efficacy of FFDS tablets in the treatment of mild to moderate VaD, we designed a 24-week randomized, double-blind, parallel, multicenter study and report its methodology here. This ongoing study is a double-blind, randomized, parallel placebo-controlled trial. A total of 240 patients with mild to moderate VaD will be enrolled. After a 2-week run-in period, the eligible patients will be randomized to receive either three FFDS or placebo tablets three times per day for 24 weeks, with a follow-up 12 weeks after the last treatment. The primary efficacy measurement will be the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-cog) and the Clinician Interview-Based Impression of Change (CIBIC-plus). The secondary efficacy measurements will include the Mini Mental State Examination (MMSE) and activities of daily living (ADL). Adverse events will also be reported. This randomized trial will be the first rigorous study on the efficacy and safety of FFDS tablets for treating cognitive symptoms in patients with VaD using a rational design. ClinicalTrials.gov: NCT01761227. Registered on 2 January 2013.

  8. Continuous quality improvement interventions to improve long-term outcomes of antiretroviral therapy in women who initiated therapy during pregnancy or breastfeeding in the Democratic Republic of Congo: design of an open-label, parallel, group randomized trial.

    PubMed

    Yotebieng, Marcel; Behets, Frieda; Kawende, Bienvenu; Ravelomanana, Noro Lantoniaina Rosa; Tabala, Martine; Okitolonda, Emile W

    2017-04-26

    Despite the rapid adoption of the World Health Organization's 2013 guidelines, children continue to be infected with HIV perinatally because of sub-optimal adherence to the continuum of HIV care in maternal and child health (MCH) clinics. To achieve the UNAIDS goal of eliminating mother-to-child HIV transmission, multiple, adaptive interventions need to be implemented to improve adherence to the HIV continuum. The aim of this open-label, parallel, group randomized trial is to evaluate the effectiveness of Continuous Quality Improvement (CQI) interventions implemented at facility and health district levels to improve retention in care and virological suppression through 24 months postpartum among pregnant and breastfeeding women receiving ART in MCH clinics in Kinshasa, Democratic Republic of Congo. Prior to randomization, the current monitoring and evaluation system will be strengthened to enable collection of high quality individual patient-level data necessary for timely indicator production and program outcome monitoring to inform CQI interventions. Following randomization, in health districts randomized to CQI, quality improvement (QI) teams will be established at the district level and at the MCH clinic level. For 18 months, QI teams will be brought together quarterly to identify key bottlenecks in the care delivery system using data from the monitoring system, develop an action plan to address those bottlenecks, and implement the action plan at the level of their district or clinics. If proven to be effective, CQI as designed here could be scaled up rapidly in resource-scarce settings to accelerate progress towards the goal of an AIDS-free generation. The protocol was retrospectively registered on February 7, 2017. ClinicalTrials.gov Identifier: NCT03048669.

  9. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g., Halberstadt, 2010; Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  10. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g., Halberstadt, 2010; Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  11. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo response. Sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
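    Because the two stage statistics are asymptotically independent standard normal under the null, they can be pooled with a prespecified weight into a single test. A minimal sketch in Python; the weight w and the erfc-based two-sided p-value are illustrative conventions, not the paper's exact procedure:

```python
import math

def spcd_combined_z(z1, z2, w=0.6):
    # Weighted combination of the stage-1 and stage-2 test statistics.
    # Under H0 both are (asymptotically) independent N(0, 1), so the
    # normalized weighted sum is again N(0, 1).
    return (w * z1 + (1 - w) * z2) / math.sqrt(w ** 2 + (1 - w) ** 2)

def two_sided_p(z):
    # Two-sided normal p-value: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2.0))
```

    In practice the weight (here 0.6 toward stage 1) must be prespecified in the protocol; the pooled Z is then referred to the standard normal distribution.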

  12. Minimally invasive strabismus surgery versus paralimbal approach: a randomized, parallel design study. Is minimally invasive strabismus surgery worth the effort?

    PubMed Central

    Sharma, Richa; Amitava, Abadan K; Bani, Sadat AO

    2014-01-01

    Introduction: Minimal access surgery is common in all fields of medicine. We compared a new minimally invasive strabismus surgery (MISS) approach with a standard paralimbal strabismus surgery (SPSS) approach in terms of post-operative course. Materials and Methods: This parallel design study was done on 28 eyes of 14 patients, in which one eye was randomized to MISS and the other to SPSS. MISS was performed by giving two conjunctival incisions parallel to the horizontal rectus muscles, and performing recession or resection below the conjunctival strip so obtained. We compared post-operative redness, congestion, chemosis, foreign body sensation (FBS), and drop intolerance (DI) on a graded scale of 0 to 3 on post-operative day 1, at 2-3 weeks, and at 6 weeks. In addition, all scores were added to obtain a total inflammatory score (TIS). Statistical Analysis: Inflammatory scores were analyzed using Wilcoxon's signed-rank test. Results: On the first post-operative day, only FBS (P = 0.01) and TIS (P = 0.04) showed a significant difference favoring MISS. At 2-3 weeks, redness (P = 0.04), congestion (P = 0.04), FBS (P = 0.02), and TIS (P = 0.04) were significantly less in MISS eyes. At 6 weeks, only redness (P = 0.04) and TIS (P = 0.05) were significantly less. Conclusion: MISS is more comfortable in the immediate post-operative period and provides better cosmesis in the intermediate period. PMID:24088635

  13. The effects of assertiveness training in patients with schizophrenia: a randomized, single-blind, controlled study.

    PubMed

    Lee, Tso-Ying; Chang, Shih-Chin; Chu, Hsin; Yang, Chyn-Yng; Ou, Keng-Liang; Chung, Min-Huey; Chou, Kuei-Ru

    2013-11-01

    In this study, we investigated the effects of group assertiveness training on assertiveness, social anxiety and satisfaction with interpersonal communication among patients with chronic schizophrenia. Only limited studies have highlighted the effectiveness of group assertiveness training among inpatients with schizophrenia. Given the lack of group assertiveness training among patients with schizophrenia, further development of programmes focusing on facilitating assertiveness, self-confidence and social skills among inpatients with chronic schizophrenia is needed. This study employed a prospective, randomized, single-blind, parallel-group design. Seventy-four patients were randomly assigned either to an experimental group receiving 12 sessions of assertiveness training or to a supportive control group. Data collection took place over the period June 2009-July 2010. Among patients with chronic schizophrenia, assertiveness, levels of social anxiety and satisfaction with interpersonal communication significantly improved immediately after the intervention and at the 3-month follow-up in the intervention group. The results of a generalized estimating equation (GEE) analysis indicated that: (1) assertiveness significantly improved from pre- to postintervention and was maintained until the follow-up; (2) anxiety regarding social interactions significantly decreased after assertiveness training; and (3) satisfaction with interpersonal communication slightly improved after the 12-session intervention and at the 3-month follow-up. Assertiveness training is a non-invasive and inexpensive therapy that appears to improve assertiveness, social anxiety and interpersonal communication among inpatients with chronic schizophrenia. These findings may provide a reference guide to clinical nurses for developing assertiveness-training protocols. © 2013 Blackwell Publishing Ltd.

  14. Screening unlabeled DNA targets with randomly ordered fiber-optic gene arrays.

    PubMed

    Steemers, F J; Ferguson, J A; Walt, D R

    2000-01-01

    We have developed a randomly ordered fiber-optic gene array for rapid, parallel detection of unlabeled DNA targets with surface-immobilized molecular beacons (MBs) that undergo a conformational change, accompanied by a fluorescence change, in the presence of a complementary DNA target. Microarrays are prepared by randomly distributing MB-functionalized 3-microm diameter microspheres in an array of wells etched in a 500-microm diameter optical imaging fiber. Using several MBs, each designed to recognize a different target, we demonstrate the selective detection of genomic cystic fibrosis related targets. Positional registration and fluorescence response monitoring of the microspheres were performed using an optical encoding scheme and an imaging fluorescence microscope system.

  15. An overview of confounding. Part 1: the concept and how to address it.

    PubMed

    Howards, Penelope P

    2018-04-01

    Confounding is an important source of bias, but it is often misunderstood. We consider how confounding occurs and how to address confounding using examples. Study results are confounded when the effect of the exposure on the outcome mixes with the effects of other risk and protective factors for the outcome. This problem arises when these factors are present to different degrees among the exposed and unexposed study participants, but not all differences between the groups result in confounding. Thinking about an ideal study where all of the population of interest is exposed in one universe and is unexposed in a parallel universe helps to distinguish confounders from other differences. In an actual study, an observed unexposed population is chosen to stand in for the unobserved parallel universe. Differences between this substitute population and the parallel universe result in confounding. Confounding by identified factors can be addressed analytically and through study design, but only randomization has the potential to address confounding by unmeasured factors. Nevertheless, a given randomized study may still be confounded. Confounded study results can lead to incorrect conclusions about the effect of the exposure of interest on the outcome. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.

  16. Sample size calculations for the design of cluster randomized trials: A summary of methodology.

    PubMed

    Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David

    2015-05-01

    Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and this growth has stimulated parallel statistical developments concerned with the design and analysis of these trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
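    The core calculation behind such guidance can be illustrated with the standard design effect: the individually randomized sample size is inflated by a factor depending on the average cluster size m, the intra-cluster correlation coefficient (ICC), and, for unequal cluster sizes, the coefficient of variation (cv). A minimal Python sketch using these textbook formulas, not the paper's endpoint-specific methods:

```python
import math

def design_effect(m, icc, cv=0.0):
    # DE = 1 + ((cv^2 + 1) * m - 1) * icc; with equal cluster sizes
    # (cv = 0) this reduces to the familiar 1 + (m - 1) * icc.
    return 1 + ((cv ** 2 + 1) * m - 1) * icc

def clusters_per_arm(n_per_arm, m, icc, cv=0.0):
    # Inflate the individually randomized per-arm sample size by the
    # design effect, then round up to whole clusters of average size m.
    n_inflated = n_per_arm * design_effect(m, icc, cv)
    return math.ceil(n_inflated / m)
```

    For example, 100 subjects per arm with clusters of 20 and an ICC of 0.05 inflate to 195 subjects, i.e., 10 clusters per arm; unequal cluster sizes (cv > 0) inflate the requirement further.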

  17. Unique Study Designs in Nephrology: N-of-1 Trials and Other Designs.

    PubMed

    Samuel, Joyce P; Bell, Cynthia S

    2016-11-01

    Alternatives to the traditional parallel-group trial design may be required to answer clinical questions in special populations, rare conditions, or with limited resources. N-of-1 trials are a unique trial design which can inform personalized evidence-based decisions for the patient when data from traditional clinical trials are lacking or not generalizable. A concise overview of factorial design, cluster randomization, adaptive designs, crossover studies, and n-of-1 trials will be provided along with pertinent examples in nephrology. The indication for analysis strategies such as equivalence and noninferiority trials will be discussed, as well as analytic pitfalls. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  18. Using re-randomization to increase the recruitment rate in clinical trials - an assessment of three clinical areas.

    PubMed

    Kahan, Brennan C

    2016-12-13

    Patient recruitment in clinical trials is often challenging, and as a result, many trials are stopped early due to insufficient recruitment. The re-randomization design allows patients to be re-enrolled and re-randomized for each new treatment episode that they experience. Because it allows multiple enrollments for each patient, this design has been proposed as a way to increase the recruitment rate in clinical trials. However, it is unknown to what extent recruitment could be increased in practice. We modelled the expected recruitment rate for parallel-group and re-randomization trials in different settings based on estimates from real trials and datasets. We considered three clinical areas: in vitro fertilization, severe asthma exacerbations, and acute sickle cell pain crises. We compared the two designs in terms of the expected time to complete recruitment, and the sample size recruited over a fixed recruitment period. Across the different scenarios we considered, we estimated that re-randomization could reduce the expected time to complete recruitment by between 4 and 22 months (relative reductions of 19% and 45%), or increase the sample size recruited over a fixed recruitment period by between 29% and 171%. Re-randomization can increase recruitment most for trials with a short follow-up period, a long trial recruitment duration, and patients with high rates of treatment episodes. Re-randomization has the potential to increase the recruitment rate in certain settings, and could lead to quicker and more efficient trials in these scenarios.
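    The recruitment argument can be sketched with a toy simulation: a parallel-group trial randomizes each patient at most once, while a re-randomization design randomizes once per eligible treatment episode. The Poisson episode model and all parameter values below are assumptions for illustration, not the modelling used in the paper:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's algorithm: multiply uniforms until the product
    # drops below e^(-lam).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def randomizations(n_patients, mean_episodes, seed=42):
    # Count randomizations accrued from the same patient pool under
    # the two designs over a fixed recruitment window.
    rng = random.Random(seed)
    parallel = rerandomized = 0
    for _ in range(n_patients):
        episodes = poisson(rng, mean_episodes)
        parallel += 1 if episodes >= 1 else 0   # enrolled once
        rerandomized += episodes                # enrolled per episode
    return parallel, rerandomized
```

    With a mean well above one episode per patient, the re-randomization count exceeds the parallel-group count, which is the mechanism behind the reported 29%-171% gains.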

  19. Neuroendocrine and Inflammatory Responses to Losartan and Continuous Positive Airway Pressure in Patients with Hypertension and Obstructive Sleep Apnea. A Randomized Controlled Trial.

    PubMed

    Thunström, Erik; Manhem, Karin; Yucel-Lindberg, Tülay; Rosengren, Annika; Lindberg, Caroline; Peker, Yüksel

    2016-11-01

    Blood pressure reduction in response to antihypertensive agents is less for patients with obstructive sleep apnea (OSA). Increased sympathetic and inflammatory activity, as well as alterations in the renin-angiotensin-aldosterone system, may play a role in this context. To address the cardiovascular mechanisms involved in response to an angiotensin II receptor antagonist, losartan, and continuous positive airway pressure (CPAP) as add-on treatment for hypertension and OSA. Newly diagnosed hypertensive patients with or without OSA (allocated in a 2:1 ratio for OSA vs. no OSA) were treated with losartan 50 mg daily during a 6-week two-center, open-label, prospective, case-control, parallel-design study. In the second 6-week, sex-stratified, open-label, randomized, parallel-design study, all subjects with OSA continued to receive losartan and were randomly assigned to either CPAP as add-on therapy or to no CPAP (1:1 ratio for CPAP vs. no CPAP). Study subjects without OSA were followed in parallel while they continued to take losartan. Blood samples were collected at baseline, after 6 weeks, and after 12 weeks for analysis of renin, aldosterone, noradrenaline, adrenaline, and inflammatory markers. Fifty-four patients with OSA and 35 without OSA were included in the first 6-week study. Losartan significantly increased renin levels and reduced aldosterone levels in the group without OSA. There was no significant decrease in aldosterone levels among patients with OSA. Add-on CPAP treatment tended to lower aldosterone levels, but reductions were more pronounced in measures of sympathetic activity. No significant changes in inflammatory markers were observed following treatment with losartan and CPAP. Hypertensive patients with OSA responded to losartan treatment with smaller reductions in aldosterone compared with hypertensive patients without OSA. 
Sympathetic system activity seemed to respond primarily to add-on CPAP treatment in patients with newly discovered hypertension and OSA. Clinical trial registered with www.clinicaltrials.gov (NCT00701428).

  20. Clamp-Crushing versus stapler hepatectomy for transection of the parenchyma in elective hepatic resection (CRUNSH) - A randomized controlled trial (NCT01049607)

    PubMed Central

    2011-01-01

    Background Hepatic resection is still associated with significant morbidity. Although the period of parenchymal transection presents a crucial step during the operation, uncertainty persists regarding the optimal technique of transection. It was the aim of the present randomized controlled trial to evaluate the efficacy and safety of hepatic resection using the technique of stapler hepatectomy compared to the simple clamp-crushing technique. Methods/Design The CRUNSH Trial is a prospective randomized controlled single-center trial with a two-group parallel design. Patients scheduled for elective hepatic resection without extrahepatic resection at the Department of General-, Visceral- and Transplantation Surgery, University of Heidelberg are enrolled into the trial and randomized intraoperatively to hepatic resection by either the clamp-crushing technique or stapler hepatectomy. The primary endpoint is total intraoperative blood loss. A set of general and surgical variables are documented as secondary endpoints. Patients and outcome assessors are blinded to the treatment intervention. Discussion The CRUNSH Trial is the first randomized controlled trial to evaluate the efficacy and safety of stapler hepatectomy compared to the clamp-crushing technique for parenchymal transection during elective hepatic resection. Trial Registration ClinicalTrials.gov: NCT01049607 PMID:21888669

  1. Randomized, Controlled Trial of CBT Training for PTSD Providers

    DTIC Science & Technology

    2016-10-29

    trial and comparative effectiveness study is to design, implement and evaluate a cost-effective, web-based, self-paced training program to provide skills...without web-centered supervision, may provide an effective means to train increasing numbers of mental health providers in relevant, evidence-based...in equal numbers to three parallel intervention conditions: a) Web-based training plus web-centered supervision; b) Web-based training alone; and c

  2. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
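    The parallel-versus-sequential distinction can be sketched with an ordinary first-order, noise-free Hopfield-style network, a deliberate simplification of the paper's randomly diluted higher-order model: Hebbian weights, then either a synchronous sweep (all neurons update on the old state) or a one-neuron-at-a-time sweep:

```python
def hebbian_weights(patterns):
    # W[i][j] = (1/N) * sum over stored patterns of xi_i * xi_j,
    # with zero diagonal (the Hebbian prescription).
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def parallel_step(w, s):
    # Synchronous (parallel) dynamics: every neuron updates using
    # the old state vector.
    n = len(s)
    return [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

def sequential_step(w, s, order):
    # Asynchronous (sequential) dynamics: neurons update one at a
    # time, each seeing the updates already made in this sweep.
    s = list(s)
    n = len(s)
    for i in order:
        s[i] = 1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
    return s
```

    In this simplified setting a stored pattern is a fixed point of both dynamics; the oscillatory and chaotic behavior the paper analyzes arises only with the higher-order couplings, dilution and noise of the full model.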

  3. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications

    PubMed Central

    Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) products of the proposed design are far better than those of the most significant studies found in the literature. PMID:28459831

  4. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    PubMed

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) products of the proposed design are far better than those of the most significant studies found in the literature.
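    At the algorithmic level, ECPM is usually realized as a double-and-add loop over the scalar bits; the doubling and addition in the loop body are exactly the two operations a combined PDPA unit fuses. A minimal affine sketch over a toy prime field — the paper's hardware works over NIST binary fields in Jacobian projective coordinates, and the curve parameters below are illustrative only, not a standard curve:

```python
# Toy short-Weierstrass curve y^2 = x^3 + A*x + B over GF(P).
P, A, B = 97, 2, 3
G = (3, 6)  # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    # Modular inverse via Fermat's little theorem (P prime).
    return pow(x, P - 2, P)

def ec_add(p1, p2):
    # Affine point addition; None represents the point at infinity.
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    # Left-to-right double-and-add; a PDPA unit fuses the two calls.
    acc = None
    for bit in bin(k)[2:]:
        acc = ec_add(acc, acc)        # point doubling
        if bit == '1':
            acc = ec_add(acc, pt)     # point addition
    return acc
```

    Hardware designs replace the expensive affine inversions with projective coordinates and pipeline the fused double/add, but the control flow is the same.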

  5. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, a field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
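    An additive lagged Fibonacci generator keeps a table of its last k outputs and produces each new value with a single add, which is what makes it cheap to replicate across FPGA fabric. A software sketch of the recurrence x[n] = (x[n-j] + x[n-k]) mod 2^m; the small lags (5, 17) are chosen for readability, whereas production LPMC work uses much larger lag pairs for longer periods:

```python
class ALFG:
    # Additive lagged Fibonacci generator:
    #   x[n] = (x[n-j] + x[n-k]) mod 2^m
    # implemented with a circular buffer of the last k outputs.
    def __init__(self, seed_state, j=5, k=17, m=32):
        assert len(seed_state) == k and 0 < j < k
        self.state = list(seed_state)   # last k values, oldest at self.i
        self.j, self.k = j, k
        self.mod = 1 << m
        self.i = 0                      # slot holding x[n-k]

    def next(self):
        k = self.k
        x = (self.state[(self.i + k - self.j) % k]
             + self.state[self.i]) % self.mod
        self.state[self.i] = x          # overwrite x[n-k] with x[n]
        self.i = (self.i + 1) % k
        return x
```

    For parallel streams, each replica gets its own independently seeded lag table (or a split of one master state); a seed table should contain at least one odd value to reach the generator's maximal period.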

  6. A critical appraisal of the reporting quality of published randomized controlled trials in the field of fall injuries.

    PubMed

    Asghari Jafarabadi, Mohammad; Sadeghi-Bazrgani, Homayoun; Dianat, Iman

    2018-06-01

    To evaluate the quality of reporting in published randomized controlled trials (RCTs) in the field of fall injuries. The 188 RCTs published between 2001 and 2011, indexed in the EMBASE and Medline databases, were extracted by searching with appropriate keywords and EMTree classification terms. The trustworthiness of the evaluation was assured through parallel evaluations by two experts in epidemiology and biostatistics. About 40%-75% of papers had problems in reporting the random allocation method, allocation concealment, random allocation implementation, blinding and similarity among groups, intention to treat, and balancing of benefits and harms. Moreover, at least 10% of papers reported inappropriately, or did not report, the design, protocol violations, sample size justification, subgroup/adjusted analyses, flow diagram, drop-outs, recruitment time, baseline data, suitable effect size on outcome, ancillary analyses, limitations and generalizability. Considering the shortcomings found, and given the importance of RCTs for fall injury prevention programmes, their reporting quality should be improved.

  7. Study protocol of Prednisone in episodic Cluster Headache (PredCH): a randomized, double-blind, placebo-controlled parallel group trial to evaluate the efficacy and safety of oral prednisone as an add-on therapy in the prophylactic treatment of episodic cluster headache with verapamil

    PubMed Central

    2013-01-01

    Background Episodic cluster headache (ECH) is a primary headache disorder that severely impairs patients' quality of life. First-line therapy in the initiation of a prophylactic treatment is verapamil. Due to its delayed onset of efficacy and the necessary slow titration of dosage for tolerability reasons, prednisone is frequently added by clinicians to the initial prophylactic treatment of a cluster episode. This treatment strategy is thought to effectively reduce the number and intensity of cluster attacks in the beginning of a cluster episode (before verapamil is effective). This study will assess the efficacy and safety of oral prednisone as an add-on therapy to verapamil and compare it to a monotherapy with verapamil in the initial prophylactic treatment of a cluster episode. Methods and design PredCH is a prospective, randomized, double-blind, placebo-controlled trial with parallel study arms. Eligible patients with episodic cluster headache will be randomized to a treatment intervention with prednisone or a placebo arm. The multi-center trial will be conducted in eight German headache clinics that specialize in the treatment of ECH. Discussion PredCH is designed to assess whether oral prednisone added to the first-line agent verapamil helps reduce the number and intensity of cluster attacks in the beginning of a cluster episode as compared to monotherapy with verapamil. Trial registration German Clinical Trials Register DRKS00004716 PMID:23889923

  8. Randomized placebo controlled blinded study to assess valsartan efficacy in preventing left ventricle remodeling in patients with dual chamber pacemaker--Rationale and design of the trial.

    PubMed

    Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa

    2015-05-01

    Dual chamber pacing is known to have a detrimental effect on cardiac performance, and the heart failure that may eventually occur is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in the extracellular matrix. In parallel, studies of experimental ischemia/reperfusion injury have shown that valsartan inhibits the activity of matrix metalloproteinase-9, increases the activity of tissue inhibitor of matrix metalloproteinase-3, and preserves global contractility and left ventricle ejection fraction. We present the rationale and design of a randomized blinded trial assessing whether 12-month administration of valsartan will prevent left ventricle remodeling in patients with preserved left ventricle ejection fraction (LVEF ≥ 40%) and a first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/day, and valsartan 160 mg/day, added to previously used drugs. The primary endpoint will be the efficacy of valsartan in preventing left ventricle remodeling during 12-month follow-up. We assess patients' functional capacity; blood plasma activity of matrix metalloproteinases and their tissue inhibitors; and NT-proBNP, tumor necrosis factor alpha, and troponin T. Left ventricle function and remodeling are assessed echocardiographically: M-mode, B-mode, and tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in an aging population with a growing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804). Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Effects of a Short Course of Eszopiclone on Continuous Positive Airway Pressure Adherence

    DTIC Science & Technology

    2009-11-17

    We collected additional data related to mood and depression, libido and erectile dysfunction, and quality of life that will be included in...onset of therapy improves long-term CPAP adherence more than placebo in adults with obstructive sleep apnea. Design: Parallel randomized, placebo...collected. (ClinicalTrials.gov registration number: NCT00612157) Setting: Academic sleep disorder center. Patients: 160 adults (mean age, 45.7 years [SD

  10. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
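
    The spectral-phase idea can be illustrated with a short sketch: shape complex white noise with the square root of a power spectrum, then inverse-FFT it to obtain a 2D random field. This is a generic single-screen version assuming Kolmogorov statistics, not the authors' modified algorithm; the grid size, spacing, and coherence length r0 below are illustrative.

    ```python
    import numpy as np

    def spectral_random_field(n=256, dx=0.01, r0=0.1, seed=0):
        """Generate a 2D random phase screen by spectral filtering.

        A generic sketch of the spectral-phase approach: complex white
        noise is shaped by sqrt(PSD) in the spatial-frequency domain and
        transformed back to real space. Parameters are illustrative."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
        k = np.hypot(kx, ky)
        k[0, 0] = 1.0                      # avoid division by zero at DC
        # Kolmogorov-like power spectrum of phase fluctuations
        psd = 0.023 * r0 ** (-5 / 3) * k ** (-11 / 3)
        psd[0, 0] = 0.0                    # drop the piston term
        noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        dk = 2 * np.pi / (n * dx)          # spectral grid spacing
        screen = np.fft.ifft2(noise * np.sqrt(psd)) * (n * dk)
        return screen.real

    phi = spectral_random_field()
    ```

    A time-variant field can be obtained by evolving the spectral noise between frames, which is where the "modified" part of the authors' method would come in.
    
    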

  11. Efficacy and safety of rasagiline as an adjunct to levodopa treatment in Chinese patients with Parkinson's disease: a randomized, double-blind, parallel-controlled, multi-centre trial.

    PubMed

    Zhang, Lina; Zhang, Zhiqin; Chen, Yangmei; Qin, Xinyue; Zhou, Huadong; Zhang, Chaodong; Sun, Hongbin; Tang, Ronghua; Zheng, Jinou; Yi, Lin; Deng, Liying; Li, Jinfang

    2013-08-01

    Rasagiline mesylate is a highly potent, selective, and irreversible monoamine oxidase type B (MAO-B) inhibitor and is effective as monotherapy or as an adjunct to levodopa for patients with Parkinson's disease (PD). However, few studies have evaluated the efficacy and safety of rasagiline in the Chinese population. This study was designed to investigate the safety and efficacy of rasagiline as adjunctive therapy to levodopa treatment in Chinese PD patients. This was a randomized, double-blind, placebo-controlled, parallel-group, multi-centre trial conducted over a 12-wk period that enrolled 244 PD patients with motor fluctuations. Participants were randomly assigned to oral rasagiline mesylate (1 mg) or placebo, once daily. Altogether, 219 patients completed the trial. Rasagiline showed significantly greater efficacy compared with placebo. During the treatment period, the primary efficacy variable, mean adjusted total daily off time, decreased from baseline by 1.7 h in patients treated with 1.0 mg/d rasagiline compared to placebo (p < 0.05). Scores on the Unified Parkinson's Disease Rating Scale also improved during rasagiline treatment. Rasagiline was well tolerated. This study demonstrated that rasagiline mesylate is effective and well tolerated as an adjunct to levodopa treatment in Chinese PD patients with motor fluctuations.

  12. Protocol design and current status of CLIVIT: a randomized controlled multicenter relevance trial comparing clips versus ligatures in thyroid surgery

    PubMed Central

    Seiler, CM; Fröhlich, BE; Veit, JA; Gazyakan, E; Wente, MN; Wollermann, C; Deckert, A; Witte, S; Victor, N; Buchler, MW; Knaebel, HP

    2006-01-01

    Background Annually, more than 90,000 surgical procedures on the thyroid gland are performed in Germany. Strategies aimed at reducing the duration of the surgical procedure are relevant to patients and the health care system, especially in the context of reducing costs. However, new techniques for quick and safe hemostasis have to be tested in clinically relevant randomized controlled trials before a general recommendation can be given. The current standard for occlusion of blood vessels in thyroid surgery is the ligature. Vascular clips may be a safe alternative but have not been investigated in a large RCT. Methods/design CLIVIT (Clips versus Ligatures in Thyroid Surgery) is an investigator-initiated, multicenter, patient-blinded, two-group parallel relevance randomized controlled trial designed by the Study Center of the German Surgical Society. Patients scheduled for elective resection of at least two-thirds of the gland for benign thyroid disease are eligible for participation. After surgical exploration, patients are randomized intraoperatively into either the conventional ligature group or the clip group. The primary objective is to test for a relevant reduction in operating time (at least 15 min) when using the clip technique. Since April 2004, 121 of the 420 required patients have been randomized in five centers. Discussion As in all trials, the different forms of bias have to be considered; in a surgical trial such as this one, surgical expertise plays a key role and will be documented and analyzed separately. This is the first randomized controlled multicenter relevance trial to compare different vessel occlusion techniques in thyroid surgery with adequate power, and detailed information about its design and framework is provided. If significant, the results may be generalizable and may change current surgical practice. PMID:16948853

  13. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, the MTJ in this work is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of the anisotropy to temporarily drive the MTJ into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage-controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64×64 MTJ array in CMOS circuits and executing operations in parallel. Furthermore, the circuit consumes very little energy per generated random bit (31.5 fJ/bit) due to the high energy efficiency of voltage-controlled MTJ switching.
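
    The read-out principle can be mimicked in software: after a VCMA pulse destabilizes each junction, thermal noise settles every device independently into one of its two stable states, so a single parallel read of the array yields a full block of bits. The toy model below abstracts the physics into a Bernoulli draw per device; the 64×64 array size comes from the text, while the settling probability p_parallel is an assumed idealization (a real device would need tuning and whitening to reach an unbiased 0.5).

    ```python
    import numpy as np

    def mtj_trng_read(rows=64, cols=64, p_parallel=0.5, rng=None):
        """One parallel read of a toy MTJ array.

        Each element models a junction whose energy barrier was lowered
        by a VCMA pulse and which then relaxes, via thermal fluctuation,
        into the parallel state with probability p_parallel (assumed)."""
        rng = np.random.default_rng() if rng is None else rng
        return (rng.random((rows, cols)) < p_parallel).astype(np.uint8)

    rng = np.random.default_rng(42)
    bits = mtj_trng_read(rng=rng)   # 4096 raw random bits per read
    ```

    At 64×64 bits per read, reaching 32 Gbps corresponds to roughly 8 million array reads per second, which is the parallelism argument the abstract makes.
    
    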

  14. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
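
    The two key ingredients, equal partitioning of particle histories among workers and uncorrelated per-worker random streams, can be sketched without MPI. In the sketch below, NumPy's SeedSequence.spawn plays the role that SPRNG/DCMT play in MC4, each loop iteration stands in for one MPI rank, and the exponential "energy deposition" kernel is an assumed stand-in for the real transport physics.

    ```python
    import numpy as np

    def simulate_histories(n_particles, seed_seq):
        """Toy 'transport' kernel: total energy deposited by a batch of
        particles, drawn from an exponential law with unit mean
        (a stand-in for a real physics model, not MC4's)."""
        rng = np.random.default_rng(seed_seq)
        return rng.exponential(scale=1.0, size=n_particles).sum()

    def parallel_mc(total_particles=100_000, n_workers=4, root_seed=2024):
        # Equally partition histories among workers; spawn() gives each
        # worker a statistically independent, non-overlapping stream,
        # which is what SPRNG/DCMT guarantee across MPI ranks.
        streams = np.random.SeedSequence(root_seed).spawn(n_workers)
        per_worker = total_particles // n_workers
        partials = [simulate_histories(per_worker, s) for s in streams]
        return sum(partials) / total_particles   # mean deposited energy

    mean_energy = parallel_mc()
    ```

    Because each partial sum depends only on its own stream, the reduction is order-independent and the result is reproducible regardless of how many workers run concurrently, which is also why the serial and parallel MC4 codes can be validated against each other.
    
    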

  15. A randomized evaluation of a computer-based physician's workstation: design considerations and baseline results.

    PubMed Central

    Rotman, B. L.; Sullivan, A. N.; McDonald, T.; DeSmedt, P.; Goodnature, D.; Higgins, M.; Suermondt, H. J.; Young, C. Y.; Owens, D. K.

    1995-01-01

    We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. Because PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention and to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 cents +/- 13.4 cents, ranging from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% +/- 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias. PMID:8563376

  16. A parallel randomized trial on the effect of a healthful diet on inflammageing and its consequences in European elderly people: design of the NU-AGE dietary intervention study.

    PubMed

    Berendsen, Agnes; Santoro, Aurelia; Pini, Elisa; Cevenini, Elisa; Ostan, Rita; Pietruszka, Barbara; Rolf, Katarzyna; Cano, Noël; Caille, Aurélie; Lyon-Belgy, Noëlle; Fairweather-Tait, Susan; Feskens, Edith; Franceschi, Claudio; de Groot, C P G M

    2013-01-01

    The proportion of European elderly is expected to increase to 30% by 2060. Combining dietary components may modulate many of the processes involved in ageing, so a whole-diet approach is likely to have a greater favourable impact on age-related decline than individual dietary components. This paper describes the design of a healthful-diet intervention targeting inflammageing and its consequences in the elderly. The NU-AGE study is a parallel randomized one-year trial in 1250 apparently healthy, independently living European participants aged 65-80 years. Participants are randomized into either the diet group or the control group. Participants in the diet group receive dietary advice aimed at meeting the nutritional requirements of the ageing population. Special attention is paid to nutrients that may be inadequate or limiting in the diets of the elderly, such as vitamin D, vitamin B12, and calcium. C-reactive protein is measured as the primary outcome. The NU-AGE study is the first dietary intervention investigating the effect of a healthful diet providing targeted nutritional recommendations for optimal health and quality of life in apparently healthy European elderly. Results of this intervention will provide evidence on the effect of a healthful diet on the prevention of age-related decline. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Optimality, sample size, and power calculations for the sequential parallel comparison design.

    PubMed

    Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A

    2011-10-15

    The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where a high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage data and the second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1- and 2-degree-of-freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 of the SPCD to determine, from a theoretical viewpoint, whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design with drug in both stages is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
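
    A rough feel for SPCD operating characteristics with binary outcomes can be had by simulation. The sketch below combines the two stage-wise z-statistics with a simple weighted sum standardized to unit variance; the paper's score tests, optimal allocation ratios, and sample-size formulae are more refined than this, and the response rates, weight w, and allocation fraction are illustrative assumptions.

    ```python
    import numpy as np

    def two_prop_z(x1, n1, x2, n2):
        """Z-statistic for a difference in two proportions (pooled SE)."""
        if n1 == 0 or n2 == 0:
            return 0.0
        p = (x1 + x2) / (n1 + n2)
        se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return 0.0 if se == 0 else (x1 / n1 - x2 / n2) / se

    def spcd_power(n=300, drug_frac=1/3, p_drug=0.35, p_plac=0.20,
                   w=0.6, n_sim=2000, seed=7):
        """Monte Carlo power of a simplified SPCD test (binary response).

        Stage 1 randomizes subjects to drug or placebo; stage-1 placebo
        non-responders continue into stage 2 on drug or placebo per the
        three-group scheme. Combined statistic: weighted sum of stage-wise
        z-statistics, standardized to unit variance (an assumption, not
        the paper's score test)."""
        rng = np.random.default_rng(seed)
        n_drug = int(n * drug_frac)    # group (iii): drug in both stages
        n_plac = n - n_drug            # groups (i)+(ii): placebo in stage 1
        reject = 0
        for _ in range(n_sim):
            x_d1 = rng.binomial(n_drug, p_drug)
            x_p1 = rng.binomial(n_plac, p_plac)
            z1 = two_prop_z(x_d1, n_drug, x_p1, n_plac)
            # stage 2 among placebo non-responders, split drug vs placebo
            nonresp = n_plac - x_p1
            n_d2, n_p2 = nonresp // 2, nonresp - nonresp // 2
            z2 = two_prop_z(rng.binomial(n_d2, p_drug), n_d2,
                            rng.binomial(n_p2, p_plac), n_p2)
            t = (w * z1 + (1 - w) * z2) / np.hypot(w, 1 - w)
            reject += t > 1.96         # one-sided test at the 2.5% level
        return reject / n_sim

    power = spcd_power()               # power under the assumed effect
    ```

    Setting p_drug equal to p_plac in the same call checks the type I error, and varying drug_frac reproduces, in miniature, the allocation-ratio question the paper answers analytically.
    
    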

  18. Multicenter Randomized Controlled Trial on Duration of Therapy for Thrombosis in Children and Young Adults (Kids-DOTT): Pilot/Feasibility Phase Findings

    PubMed Central

    Goldenberg, N.A.; Abshire, T.; Blatchford, P.J.; Fenton, L.Z.; Halperin, J.L.; Hiatt, W.R.; Kessler, C.M.; Kittelson, J.M.; Manco-Johnson, M.J.; Spyropoulos, A.C.; Steg, P.G.; Stence, N.V.; Turpie, A.G.G.; Schulman, S.

    2015-01-01

    BACKGROUND Randomized controlled trials (RCTs) in pediatric venous thromboembolism (VTE) treatment have been challenged by unsubstantiated design assumptions and/or poor accrual. Pilot/feasibility (P/F) studies are critical to future RCT success. METHODS Kids-DOTT is a multicenter RCT investigating non-inferiority of a 6-week (shortened) vs. 3-month (conventional) duration of anticoagulation in patients <21 years old with provoked venous thrombosis. The primary efficacy and safety endpoints are symptomatic recurrent VTE at 1 year and anticoagulant-related, clinically relevant bleeding. In the P/F phase, 100 participants were enrolled in an open-label, blinded-endpoint, parallel-cohort RCT design. RESULTS No eligibility violations or randomization errors occurred. Of the enrolled patients, 69% were randomized, 3% missed the randomization window, and 28% were followed in pre-specified observational cohorts for completely occlusive thrombosis or persistent antiphospholipid antibodies. Retention at 1 year was 82%. Inter-observer agreement between local and blinded central determination of venous occlusion by imaging at 6 weeks post-diagnosis was strong (κ-statistic=0.75; 95% confidence interval [CI] 0.48–1.0). Primary efficacy and safety event rates were 3.3% (95% CI 0.3–11.5%) and 1.4% (95% CI 0.03–7.4%). CONCLUSIONS The P/F phase of Kids-DOTT has demonstrated the validity of vascular imaging findings of occlusion as a randomization criterion, and defined randomization, retention, and endpoint rates to inform the fully powered RCT. PMID:26118944

  19. Pelvic floor muscle training versus watchful waiting or pessary treatment for pelvic organ prolapse (POPPS): design and participant baseline characteristics of two parallel pragmatic randomized controlled trials in primary care.

    PubMed

    Wiegersma, Marian; Panman, Chantal M C R; Kollen, Boudewijn J; Vermeulen, Karin M; Schram, Aaltje J; Messelink, Embert J; Berger, Marjolein Y; Lisman-Van Leeuwen, Yvonne; Dekker, Janny H

    2014-02-01

    Pelvic floor muscle training (PFMT) and pessaries are commonly used in the conservative treatment of pelvic organ prolapse (POP). Because there is a lack of evidence regarding the optimal choice between these two interventions, we designed the "Pelvic Organ prolapse in primary care: effects of Pelvic floor muscle training and Pessary treatment Study" (POPPS). POPPS consists of two parallel open-label randomized controlled trials performed in primary care in women aged ≥55 years, recruited through a postal questionnaire. In POPPS trial 1, women with mild POP receive either PFMT or watchful waiting. In POPPS trial 2, women with advanced POP receive either PFMT or pessary treatment. Patient recruitment started in 2009 and finished in December 2012. The primary outcome of both POPPS trials is improvement in POP-related symptoms. Secondary outcomes are quality of life, sexual function, POP-Q stage, pelvic floor muscle function, post-void residual volume, patients' perception of improvement, and costs. All outcomes are measured 3, 12, and 24 months after the start of treatment. Cost-effectiveness will be calculated based on societal costs, using the PFDI-20 and the EQ-5D as outcomes. In this paper, the POPPS design, the challenges encountered and our solutions, and participant baseline characteristics are presented. For both trials, the target number of patients in each treatment group has been achieved, giving the study sufficient power to yield meaningful results. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Rationale and design of a randomized, double-blind, parallel-group study of terutroban 30 mg/day versus aspirin 100 mg/day in stroke patients: the prevention of cerebrovascular and cardiovascular events of ischemic origin with terutroban in patients with a history of ischemic stroke or transient ischemic attack (PERFORM) study.

    PubMed

    Bousser, M G; Amarenco, P; Chamorro, A; Fisher, M; Ford, I; Fox, K; Hennerici, M G; Mattle, H P; Rothwell, P M

    2009-01-01

    Ischemic stroke is a leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superior efficacy of terutroban versus aspirin in the secondary prevention of cerebrovascular and cardiovascular events. The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged ≥55 years who have suffered an ischemic stroke (≤3 months previously) or a transient ischemic attack (≤8 days previously). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain a statistical power of 90%, at least 18,000 patients must be included in this event-driven trial. The first patient was randomized in February 2006. The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event. Copyright 2009 S. Karger AG, Basel.

  1. The clinical efficacy of reminiscence therapy in patients with mild-to-moderate Alzheimer disease: Study protocol for a randomized parallel-design controlled trial.

    PubMed

    Li, Mo; Lyu, Ji-Hui; Zhang, Yi; Gao, Mao-Long; Li, Wen-Jie; Ma, Xin

    2017-12-01

    Alzheimer disease (AD) is one of the most common diseases among older adults. Currently, various nonpharmacological interventions are used for the treatment of AD; for example, reminiscence therapy is widely used in Western countries. In China, however, it is often applied empirically, and the evidence-based efficacy of reminiscence therapy in AD patients remains to be determined. The aim of this research is therefore to assess the effectiveness of reminiscence therapy for elderly Chinese patients. This is a randomized parallel-design controlled trial. Patients with mild or moderate AD at Beijing Geriatric Hospital, China, will be randomized into control and intervention groups (n = 45 for each group). In the intervention group, along with conventional drug therapy, participants will receive reminiscence therapy sessions of 35 to 45 minutes, 2 times/wk, for 12 consecutive weeks. Patients in the control group will undergo conventional drug therapy only. The primary outcome measure will be the difference in Alzheimer disease Assessment Scale-Cognitive section scores. The secondary outcome measures will be the differences in the Cornell scale for depression in dementia, Neuropsychiatric Inventory, and Barthel Index scores at baseline, at 4 and 12 weeks of treatment, and 12 weeks after treatment. The protocols have been approved by the ethics committee of Beijing Geriatric Hospital of China (approval no. 2015-010). Findings will be disseminated through presentations at scientific conferences and in academic journals. Chinese Clinical Trial Registry identifier: ChiCTR-INR-16009505. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  2. Dietary supplementation with rice bran fermented with Lentinus edodes increases interferon-γ activity without causing adverse effects: a randomized, double-blind, placebo-controlled, parallel-group study.

    PubMed

    Choi, Ji-Young; Paik, Doo-Jin; Kwon, Dae Young; Park, Yongsoon

    2014-04-22

    The purpose of this study was to investigate the hypothesis that dietary supplementation with rice bran fermented with Lentinus edodes (rice bran exo-biopolymer, RBEP), a substance known to contain arabinoxylan, enhances natural killer (NK) cell activity and modulates cytokine production in healthy adults. The study used a randomized, double-blind, placebo-controlled, parallel-group design. Eighty healthy participants with white blood cell counts of 4,000-8,000 cells/μL were randomly assigned to take six capsules per day of either 3 g RBEP or 3 g placebo for 8 weeks. Three participants in the placebo group were excluded after initiation of the protocol; no severe adverse effects of RBEP supplementation were reported. NK cell activity of peripheral blood mononuclear cells was measured using nonradioactive cytotoxicity assay kits, and serum cytokine concentrations, including interferon (IFN)-γ, tumor necrosis factor (TNF)-α, interleukin (IL)-2, IL-4, IL-10, and IL-12, were measured with a Bio-Plex cytokine assay kit. This study was registered with the Clinical Research Information Service (KCT0000536). RBEP supplementation significantly increased IFN-γ production compared with placebo (P = 0.012). However, it did not affect NK cell activity or the other cytokine levels (IL-2, IL-4, IL-10, IL-12, and TNF-α) compared with placebo. The data obtained in this study indicate that RBEP supplementation increases IFN-γ secretion without causing significant adverse effects, and thus may be beneficial to healthy individuals. This new rice bran-derived product may therefore be potentially useful in the formulation of solid and liquid foods designed for the treatment and prevention of pathological states associated with defective immune responses.

  3. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.

  4. A Randomized, Rater-Blinded, Parallel Trial of Intensive Speech Therapy in Sub-Acute Post-Stroke Aphasia: The SP-I-R-IT Study

    ERIC Educational Resources Information Center

    Martins, Isabel Pavao; Leal, Gabriela; Fonseca, Isabel; Farrajota, Luisa; Aguiar, Marta; Fonseca, Jose; Lauterbach, Martin; Goncalves, Luis; Cary, M. Carmo; Ferreira, Joaquim J.; Ferro, Jose M.

    2013-01-01

    Background: There is conflicting evidence regarding the benefits of intensive speech and language therapy (SLT), particularly because intensity is often confounded with total SLT provided. Aims: A two-centre, randomized, rater-blinded, parallel study was conducted to compare the efficacy of 100 h of SLT in a regular (RT) versus intensive (IT)…

  5. VLSI Design, Parallel Computation and Distributed Computing

    DTIC Science & Technology

    1991-09-30

    Daniel Kleitman, Tom Leighton, David Shmoys, Michael Sipser, Eva Tardos...Leighton and Plaxton on the construction of a simple c log n-depth circuit (where c < 7.5) that sorts a random permutation with very high probability...Principles of Distributed Computing (PODC), August 1992, Vancouver, British Columbia (to appear). 20. ...I. Gopal, M. Kaplan and S. Kutten, "Distributed Control for

  6. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments

    PubMed Central

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    Motivation: With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. Results: We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. Availability and implementation: MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27605100
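
    The negative-control operations described for the Transmutation tool (scrambling, reversing, complementing, and random point mutations) are straightforward string transformations. The sketch below is an illustrative re-implementation under those descriptions, not MPRAnator's actual code; the example motif is arbitrary.

    ```python
    import random

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def scramble(seq, rng):
        """Shuffle the sequence: destroys motif structure while
        preserving nucleotide composition."""
        s = list(seq)
        rng.shuffle(s)
        return "".join(s)

    def reverse(seq):
        """Reverse the sequence order."""
        return seq[::-1]

    def complement(seq):
        """Complement each base (A<->T, C<->G)."""
        return seq.translate(COMPLEMENT)

    def mutate(seq, n_mut, rng):
        """Introduce n_mut random substitutions at distinct positions."""
        s = list(seq)
        for i in rng.sample(range(len(s)), n_mut):
            s[i] = rng.choice([b for b in "ACGT" if b != s[i]])
        return "".join(s)

    rng = random.Random(1)
    ctrl = scramble("TGACTCA", rng)   # e.g. a scrambled AP-1-like motif
    ```

    Composing reverse(complement(seq)) yields the reverse complement, and applying these transforms to a motif before embedding it in a backbone sequence gives matched negative controls for the reporter assay.
    
    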

  7. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments.

    PubMed

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. The MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  8. Evaluation of Tai Chi Yunshou exercises on community-based stroke patients with balance dysfunction: a study protocol of a cluster randomized controlled trial.

    PubMed

    Tao, Jing; Rao, Ting; Lin, Lili; Liu, Wei; Wu, Zhenkai; Zheng, Guohua; Su, Yusheng; Huang, Jia; Lin, Zhengkun; Wu, Jinsong; Fang, Yunhua; Chen, Lidian

    2015-02-25

    Balance dysfunction after stroke limits patients' general function and participation in daily life. Previous research has suggested that Tai Chi exercise can improve balance function in older individuals and reduce the risk of falls, but convincing evidence that Tai Chi exercise enhances balance function after stroke is still lacking. Considering the difficulty stroke patients have in completing the full exercise, the current trial evaluates the benefit of Tai Chi Yunshou exercise for patients with balance dysfunction after stroke through a cluster-randomized, parallel-controlled design. A single-blind, cluster-randomized, parallel-controlled trial will be conducted. A total of 10 community health centers (5 per arm) will be selected and randomly allocated to either a Tai Chi Yunshou exercise group or a balance rehabilitation training group. Each community health center will be asked to enroll 25 eligible patients into the trial. Each session will last 60 minutes, with 1 session per day, 5 days per week, for a total of 12 weeks of training. Primary and secondary outcomes will be measured at baseline, at 4, 8 and 12 weeks after randomization, and at 6-week and 12-week follow-up. Safety and economic evaluations will also be performed. This protocol aims to evaluate the effectiveness of Tai Chi Yunshou exercise for the balance function of patients after stroke. If the outcome is positive, this project will provide an appropriate and economical balance rehabilitation technique for community-based stroke patients. Chinese Clinical Trial Registry: ChiCTR-TRC-13003641. Registration date: 22 August, 2013. http://www.chictr.org/usercenter/project/listbycreater.aspx
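
    Cluster randomization as described above (randomizing whole health centers, not individual patients) can be sketched as follows. This is a hypothetical illustration of the allocation step, not the trial's actual procedure; center names and the helper function are placeholders.

    ```python
    import random

    def allocate_clusters(centers, arms=("Tai Chi Yunshou", "balance rehabilitation"), seed=None):
        """Randomly split clusters equally between two trial arms.

        Randomization happens at the cluster (health center) level, so
        every patient enrolled at a center receives that center's arm.
        """
        rng = random.Random(seed)
        shuffled = centers[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {arms[0]: shuffled[:half], arms[1]: shuffled[half:]}

    centers = [f"center_{i}" for i in range(1, 11)]   # 10 community health centers
    allocation = allocate_clusters(centers, seed=42)
    for arm, group in allocation.items():
        print(arm, group)                             # 5 centers per arm
    ```

    Because whole clusters share an arm, outcomes of patients within a center are correlated, which is why the analysis must account for clustering.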

  9. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. 
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later; should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives. RAM: 512 MB to 732 MB main memory on the host CPU (depending on the data type of the random numbers) and 512 MB of GPU global memory. Classification: 4.13, 6.5. Nature of problem: Many computational science applications consume large numbers of random numbers; Monte Carlo simulations, for example, can consume limitless random numbers as long as computing resources allow. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
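
    The core idea GASPRNG accelerates, giving every parallel worker its own statistically independent random stream, can be illustrated with NumPy's seed-spawning facility. This is a generic sketch of independent streams, not GASPRNG's API.

    ```python
    from numpy.random import SeedSequence, default_rng

    # One root seed is split into statistically independent child seeds,
    # one per worker (CPU core or GPU block), in the same spirit as
    # SPRNG's independent streams.
    root = SeedSequence(20130401)
    children = root.spawn(4)                 # one child seed per worker
    streams = [default_rng(s) for s in children]

    # Each worker can now draw from its own stream without coordination.
    samples = [rng.random(3) for rng in streams]
    for i, s in enumerate(samples):
        print(f"stream {i}: {s}")
    ```

    The draws are reproducible from the root seed alone, which is what lets a parallel run produce the same results as a serial reference run.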

  10. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    PubMed

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
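
    The randomization-within-blocks principle discussed above can be sketched as follows. This is a generic illustration (not code from the review); the function, sample names and the use of mass-spectrometry batches as the blocking factor are assumptions for the example.

    ```python
    import random

    def blocked_randomization(samples, groups=("disease", "control"), block_size=4, seed=None):
        """Assign samples to groups in balanced, randomly ordered blocks.

        Each block contains an equal number of each group label in random
        order, so group assignment stays balanced within each block
        (e.g., each mass spectrometry batch, the blocking factor).
        """
        rng = random.Random(seed)
        per_group = block_size // len(groups)
        assignment = {}
        for start in range(0, len(samples), block_size):
            block = samples[start:start + block_size]
            labels = list(groups) * per_group
            rng.shuffle(labels)
            assignment.update(zip(block, labels))
        return assignment

    samples = [f"sample_{i}" for i in range(8)]
    print(blocked_randomization(samples, seed=1))
    ```

    Balancing groups within each batch prevents a batch effect from being confounded with the disease/control comparison.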

  11. Rhodiola rosea therapy for major depressive disorder: a study protocol for a randomized, double-blind, placebo- controlled trial

    PubMed Central

    Mao, Jun J; Li, Qing S.; Soeller, Irene; Xie, Sharon X; Amsterdam, Jay D.

    2014-01-01

    Background Rhodiola rosea (R. rosea), a botanical of both western and traditional Chinese medicine, has been used as a folk remedy for improving stamina and reducing stress. However, few controlled clinical trials have examined the safety and efficacy of R. rosea for the treatment of major depressive disorder (MDD). This study seeks to evaluate the safety and efficacy of R. rosea in a 12-week, randomized, double-blind, placebo-controlled, parallel group study design. Methods / Design Subjects with MDD not receiving antidepressant therapy will be randomized to either R. rosea extract 340–1,360 mg daily; sertraline 50–200 mg daily, or placebo for 12 weeks. The primary outcome measure will be change over time in the mean 17-item Hamilton Depression Rating score. Secondary outcome measures will include safety and quality of life ratings. Statistical procedures will include mixed-effects models to assess efficacy for primary and secondary outcomes. Discussion This study will provide valuable preliminary information on the safety and efficacy data of R. rosea versus conventional antidepressant therapy of MDD. It will also inform additional hypotheses and study design of future, fully powered, phase III clinical trials with R. rosea to determine its safety and efficacy in MDD. PMID:25610752

  12. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    PubMed Central

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  13. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    PubMed

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  14. Performance Evaluation of Parallel Branch and Bound Search with the Intel iPSC (Intel Personal SuperComputer) Hypercube Computer.

    DTIC Science & Technology

    1986-12-01

    Report excerpt (table-of-contents fragment): III. Analysis of Parallel Design: Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design.

  15. Comparison of intra-articular injections of hyaluronic acid and corticosteroid in the treatment of osteoarthritis of the hip in comparison with intra-articular injections of bupivacaine. Design of a prospective, randomized, controlled study with blinding of the patients and outcome assessors.

    PubMed

    Colen, Sascha; van den Bekerom, Michel P J; Bellemans, Johan; Mulier, Michiel

    2010-11-16

    Although intra-articular hyaluronic acid is well established as a treatment for osteoarthritis of the knee, its use in hip osteoarthritis is not based on large randomized controlled trials. There is a need for more rigorously designed studies on hip osteoarthritis treatment as this subject is still very much under debate. Randomized, controlled trial with a three-armed, parallel-group design. Approximately 315 patients complying with the inclusion and exclusion criteria will be randomized into one of the following treatment groups: infiltration of the hip joint with hyaluronic acid, with a corticosteroid or with 0.125% bupivacaine.The following outcome measure instruments will be assessed at baseline, i.e. before the intra-articular injection of one of the study products, and then again at six weeks, 3 and 6 months after the initial injection: Pain (100 mm VAS), Harris Hip Score and HOOS, patient assessment of their clinical status (worse, stable or better then at the time of enrollment) and intake of pain rescue medication (number per week). In addition patients will be asked if they have complications/adverse events. The six-month follow-up period for all patients will begin on the date the first injection is administered. This randomized, controlled, three-arm study will hopefully provide robust information on two of the intra-articular treatments used in hip osteoarthritis, in comparison to bupivacaine. NCT01079455.

  16. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
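
    The computational kernel described here, diagonalizing a dense real symmetric random matrix, can be reproduced on a single processor with NumPy. This is a serial sketch for orientation only, not the MasPar SIMD implementation.

    ```python
    import numpy as np

    # A dense real symmetric "random Hamiltonian": symmetrize a matrix of
    # random entries, then diagonalize. On the MasPar systems this
    # eigensolve was distributed across SIMD processors; here
    # numpy.linalg.eigh plays the role of the dense symmetric eigensolver.
    rng = np.random.default_rng(0)
    n = 200
    a = rng.standard_normal((n, n))
    h = (a + a.T) / 2.0                       # real symmetric random matrix

    evals, evecs = np.linalg.eigh(h)
    print(evals[:3])                          # lowest eigenvalues
    # Residual check: H v = lambda v for every eigenpair
    residual = np.max(np.abs(h @ evecs - evecs * evals))
    print("max residual:", residual)
    ```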

  17. Reporting of participant flow diagrams in published reports of randomized trials.

    PubMed

    Hopewell, Sally; Hirst, Allison; Collins, Gary S; Mallett, Sue; Yu, Ly-Mee; Altman, Douglas G

    2011-12-05

    Reporting of the flow of participants through each stage of a randomized trial is essential to assess the generalisability and validity of its results. We assessed the type and completeness of information reported in CONSORT (Consolidated Standards of Reporting Trials) flow diagrams published in current reports of randomized trials. A cross-sectional review of all primary reports of randomized trials which included a CONSORT flow diagram indexed in PubMed core clinical journals (2009). We assessed the proportion of parallel-group trial publications reporting specific items recommended by CONSORT for inclusion in a flow diagram. Of 469 primary reports of randomized trials, 263 (56%) included a CONSORT flow diagram, of which 89% (237/263) were published in a CONSORT-endorsing journal. Reports published in CONSORT-endorsing journals were more likely to include a flow diagram (62%; 237/380 versus 29%; 26/89). Ninety percent (236/263) of reports which included a flow diagram had a parallel-group design, of which 49% (116/236) evaluated drug interventions, 58% (137/236) were multicentre, and 79% (187/236) compared two study groups, with a median sample size of 213 participants. Eighty-one percent (191/236) reported the overall number of participants assessed for eligibility, 71% (168/236) the number excluded prior to randomization and 98% (231/236) the overall number randomized. Reasons for exclusion prior to randomization were more poorly reported. Ninety-four percent (223/236) reported the number of participants allocated to each arm of the trial. However, only 40% (95/236) reported the number who actually received the allocated intervention, 67% (158/236) the number lost to follow-up in each arm of the trial, 61% (145/236) whether participants discontinued the intervention during the trial and 54% (128/236) the number included in the main analysis.
Over half of published reports of randomized trials included a diagram showing the flow of participants through the trial. However, information was often missing from published flow diagrams, even in articles published in CONSORT-endorsing journals. If important information is not reported, it can be difficult and sometimes impossible to know whether the conclusions of that trial are justified by the data presented.

  18. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data for sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms do not address these characteristics jointly. Motivated by these traits and drawing on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying frequency concepts and search-space partition theory; the second is to build frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and several purpose-designed data structures, we simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.

  19. Stability of tapered and parallel-walled dental implants: A systematic review and meta-analysis.

    PubMed

    Atieh, Momen A; Alsabeeha, Nabeel; Duncan, Warwick J

    2018-05-15

    Clinical trials have suggested that dental implants with a tapered configuration have improved stability at placement, allowing immediate placement and/or loading. The aim of this systematic review and meta-analysis was to evaluate the implant stability of tapered dental implants compared to standard parallel-walled dental implants. Applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, randomized controlled trials (RCTs) were searched for in electronic databases and complemented by hand searching. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool and data were analyzed using statistical software. A total of 1199 studies were identified, of which five trials were included, with 336 dental implants in 303 participants. Overall meta-analysis showed that tapered dental implants had higher implant stability values than parallel-walled dental implants at insertion and at 8 weeks, but the difference was not statistically significant. Tapered dental implants had significantly less marginal bone loss compared to parallel-walled dental implants. No significant differences in implant failure rate were found between tapered and parallel-walled dental implants. There is limited evidence to demonstrate the effectiveness of tapered dental implants in achieving greater implant stability compared to parallel-walled dental implants. Superior short-term results in maintaining peri-implant marginal bone with tapered dental implants are possible. Further properly designed RCTs are required to endorse the supposed advantages of tapered dental implants in immediate loading protocols and other complex clinical scenarios. © 2018 Wiley Periodicals, Inc.

  20. Separating the Laparoscopic Camera Cord From the Monopolar "Bovie" Cord Reduces Unintended Thermal Injury From Antenna Coupling: A Randomized Controlled Trial.

    PubMed

    Robinson, Thomas N; Jones, Edward L; Dunn, Christina L; Dunne, Bruce; Johnson, Elizabeth; Townsend, Nicole T; Paniccia, Alessandro; Stiegmann, Greg V

    2015-06-01

    The monopolar "Bovie" is used in virtually every laparoscopic operation. The active electrode and its cord emit radiofrequency energy that couples (or transfers) to nearby conductive material without direct contact. This phenomenon is increased when the active electrode cord is oriented parallel to another wire/cord. The parallel orientation of the "Bovie" and laparoscopic camera cords causes transfer of energy to the camera cord, resulting in cutaneous burns at the camera trocar incision. We hypothesized that separating the active electrode/camera cords would reduce thermal injury at the camera trocar incision in comparison with parallel-oriented active electrode/camera cords. In this prospective, blinded, randomized controlled trial, patients undergoing standardized laparoscopic cholecystectomy were randomized to separated active electrode/camera cords or parallel-oriented active electrode/camera cords. The primary outcome variable was thermal injury determined by histology from skin biopsied at the camera trocar incision. Eighty-four patients participated. Baseline demographics were similar in the groups for age, sex, preoperative diagnosis, operative time, and blood loss. Thermal injury at the camera trocar incision was lower in the separated versus the parallel group (31% vs 57%; P = 0.027). Separation of the laparoscopic camera cord from the active electrode cord decreases thermal injury from antenna coupling at the camera trocar incision in comparison with the parallel orientation of these cords. Therefore, parallel orientation of these cords (an arrangement promoted by integrated operating rooms) should be abandoned. The findings of this study should influence the operating room setup for all laparoscopic cases.

  1. Generating Billion-Edge Scale-Free Networks in Seconds: Performance Study of a Novel GPU-based Preferential Attachment Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.; Alam, Maksudul

    A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for the single instruction multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, to generate scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale-free network of a billion edges in less than 2 seconds.
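
    The sequential preferential-attachment model that cuPPA parallelizes can be sketched as follows. This is a textbook serial version for reference (each new node attaches to m existing nodes with probability proportional to degree), not the GPU algorithm.

    ```python
    import random

    def preferential_attachment(n, m, seed=None):
        """Serial preferential-attachment network generator.

        Each new node attaches to m distinct existing nodes chosen with
        probability proportional to their current degree. Sampling
        uniformly from the list of edge endpoints implements the
        degree-proportional selection.
        """
        rng = random.Random(seed)
        endpoints = []          # every node appears once per incident edge
        edges = []
        # start from a small clique of m + 1 nodes
        for u in range(m + 1):
            for v in range(u):
                edges.append((u, v))
                endpoints += [u, v]
        for u in range(m + 1, n):
            targets = set()
            while len(targets) < m:
                targets.add(rng.choice(endpoints))   # degree-biased choice
            for v in targets:
                edges.append((u, v))
                endpoints += [u, v]
        return edges

    edges = preferential_attachment(1000, 3, seed=7)
    print(len(edges))   # m edges per node after the initial clique
    ```

    The data dependence of each step on the current degree distribution is exactly what makes a SIMD-parallel version like cuPPA nontrivial.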

  2. A Proposed Solution to the Problem with Using Completely Random Data to Assess the Number of Factors with Parallel Analysis

    ERIC Educational Resources Information Center

    Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo

    2012-01-01

    A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
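
    The completely-random-data procedure the abstract refers to (classic Horn-style parallel analysis, the baseline the authors propose to revise) can be sketched as follows. The implementation details here are assumed standard practice, not the authors' revised method.

    ```python
    import numpy as np

    def parallel_analysis(data, n_sims=100, quantile=95, seed=0):
        """Classic parallel analysis with completely random data.

        Compares eigenvalues of the sample correlation matrix against the
        chosen percentile of eigenvalues from random normal data of the
        same shape; factors whose observed eigenvalue exceeds the random
        benchmark are retained.
        """
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
        sims = np.empty((n_sims, p))
        for i in range(n_sims):
            r = rng.standard_normal((n, p))
            sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
        threshold = np.percentile(sims, quantile, axis=0)
        return int(np.sum(obs > threshold))

    # Two correlated blocks of variables, so two factors are expected
    rng = np.random.default_rng(1)
    f = rng.standard_normal((500, 2))
    noise = 0.5 * rng.standard_normal((500, 6))
    data = np.hstack([f[:, [0]]] * 3 + [f[:, [1]]] * 3) + noise
    print(parallel_analysis(data))
    ```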

  3. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
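
    A minimal sequential sketch of simulated annealing on a Random 3-SAT instance is shown below, illustrating only the cost function (number of unsatisfied clauses) and the Boltzmann acceptance rule; the paper's Generalized Speculative Computation runs this kind of move evaluation synchronously in parallel, which this sketch does not attempt. Parameter values are illustrative.

    ```python
    import math
    import random

    def random_ksat(n_vars, n_clauses, k=3, rng=random):
        # Each clause: k distinct variables, each negated with probability 1/2
        return [[(v + 1) * rng.choice((-1, 1))
                 for v in rng.sample(range(n_vars), k)]
                for _ in range(n_clauses)]

    def unsatisfied(clauses, assign):
        return sum(not any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)

    def anneal(clauses, n_vars, t0=2.0, cooling=0.999, steps=10000, seed=0):
        """Flip a random variable; keep the flip if it helps, or with
        Boltzmann probability exp(-delta / t) otherwise, while cooling."""
        rng = random.Random(seed)
        assign = [rng.random() < 0.5 for _ in range(n_vars)]
        cost, t = unsatisfied(clauses, assign), t0
        for _ in range(steps):
            i = rng.randrange(n_vars)
            assign[i] = not assign[i]
            new_cost = unsatisfied(clauses, assign)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                cost = new_cost
            else:
                assign[i] = not assign[i]     # reject: undo the flip
            t *= cooling
        return cost, assign

    clauses = random_ksat(100, 425, rng=random.Random(42))
    final_cost, _ = anneal(clauses, 100)
    print("unsatisfied clauses:", final_cost)
    ```

    The 100-variable/425-clause shape matches the smallest instance size reported above; near this clause-to-variable ratio, instances sit close to the hard phase transition.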

  4. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    PubMed

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus the scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.


  5. HPCC Methodologies for Structural Design and Analysis on Parallel and Distributed Computing Platforms

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1998-01-01

    In this grant, we have proposed a three-year research effort focused on developing High Performance Computation and Communication (HPCC) methodologies for structural analysis on parallel processors and clusters of workstations, with emphasis on reducing the structural design cycle time. Besides consolidating and further improving the FETI solver technology to address plate and shell structures, we have proposed to tackle the following design related issues: (a) parallel coupling and assembly of independently designed and analyzed three-dimensional substructures with non-matching interfaces, (b) fast and smart parallel re-analysis of a given structure after it has undergone design modifications, (c) parallel evaluation of sensitivity operators (derivatives) for design optimization, and (d) fast parallel analysis of mildly nonlinear structures. While our proposal was accepted, support was provided only for one year.

  6. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ˜5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. 
This program, in combination with clonEasy, makes it possible to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code. Program summary 1. Title of program: clonEasy. Catalogue identifier: ADYD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a Unix-style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1). Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux). Programming language used: Linux shell (bash) script, FORTRAN 77. No. of bits in a word: 32. No. of lines in distributed program, including test data, etc.: 1916. No. of bytes in distributed program, including test data, etc.: 18 202. Distribution format: tar.gz. Nature of the physical problem: There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution: The presented collection of Linux scripts and auxiliary FORTRAN programs implements Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers.
Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution.
Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution:For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S' = (a^J S) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program:The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the separation between the produced seeds to less than 2^31 when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 so that the sequence of a generator with a very long period can be split efficiently. Typical running time:The execution time depends on the parameters of the MLCG used and on the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
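The jump-ahead idea behind seedsMLCG can be sketched in a few lines of Python (an illustrative reimplementation, not the distributed FORTRAN code). After J iterations of s → (a·s) mod m, the state is S_J = (a^J · S_0) mod m, so three-argument `pow` jumps the generator J positions in O(log J) time. The parameters below are those of the first MLCG component of RANECU; the seed value and the spacing 10^12 are arbitrary examples.

```python
def jump_seed(seed, a, m, j):
    """State of the MLCG s -> (a*s) % m after j iterations from `seed`."""
    return (pow(a, j, m) * seed) % m

def disjoint_seeds(seed, a, m, j, n):
    """n seeds spaced j positions apart in the generator's cycle."""
    out = [seed]
    for _ in range(n - 1):
        out.append(jump_seed(out[-1], a, m, j))
    return out

# First MLCG component of RANECU; seed and spacing are arbitrary examples.
A, M = 40014, 2147483563
clone_seeds = disjoint_seeds(12345, A, M, 10**12, 4)
```

Each clone would then be started from one entry of `clone_seeds`, so the clones consume disjoint, consecutive segments of the generator's cycle.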

  7. Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.

    PubMed

    Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M

    2010-11-01

    Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.

  8. Draco, Version 6.x.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Kelly; Budge, Kent; Lowrie, Rob

    2016-03-03

    Draco is an object-oriented component library geared towards numerically intensive, radiation (particle) transport applications built for parallel computing hardware. It consists of semi-independent packages and a robust build system. The packages in Draco provide a set of components that can be used by multiple clients to build transport codes. The build system can also be extracted for use in clients. Software includes smart pointers, Design-by-Contract assertions, unit test framework, wrapped MPI functions, a file parser, unstructured mesh data structures, a random number generator, root finders and an angular quadrature component.

  9. White Matter Microstructural Correlates of Superior Long-term Skill Gained Implicitly under Randomized Practice

    PubMed Central

    Song, Sunbin; Sharma, Nikhil; Buch, Ethan R.

    2012-01-01

    We value skills we have learned intentionally, but equally important are skills acquired incidentally, without the ability to describe how or what is learned; such learning is referred to as implicit. Randomized practice schedules are superior to grouped schedules for long-term skill gained intentionally, but their relevance for implicit learning is not known. In a parallel design, we studied healthy subjects who learned a motor sequence implicitly under a randomized or grouped practice schedule and obtained diffusion-weighted images to identify white matter microstructural correlates of long-term skill. Randomized practice led to superior long-term skill compared with grouped practice. Whole-brain analyses relating interindividual variability in fractional anisotropy (FA) to long-term skill demonstrated that 1) skill in randomized learners correlated with FA within the corticostriatal tract connecting left sensorimotor cortex to posterior putamen, while 2) skill in grouped learners correlated with FA within the right forceps minor connecting homologous regions of the prefrontal cortex (PFC) and the corticostriatal tract connecting lateral PFC to anterior putamen. These results demonstrate, first, that randomized practice schedules improve long-term implicit skill more than grouped practice schedules and, second, that the superior skill acquired through randomized practice can be related to white matter microstructure in the sensorimotor corticostriatal network. PMID:21914632

  10. The studies of FT-IR and CD spectroscopy on catechol oxidase I from tobacco

    NASA Astrophysics Data System (ADS)

    Xiao, Hourong; Xie, Yongshu; Liu, Qingliang; Xu, Xiaolong; Shi, Chunhua

    2005-10-01

    A novel copper-containing enzyme named COI (catechol oxidase I) has been isolated and purified from tobacco by extracting acetone-immersed powder with phosphate buffer, centrifugation at low temperature, ammonium sulfate fractional precipitation, and column chromatography on DEAE-Sephadex (A-50), Sephadex (G-75), and DEAE-cellulose (DE-52). PAGE and SDS-PAGE were used to detect the enzyme purity and to determine its molecular weight. The secondary structures of COI at different pH values, temperatures, and concentrations of guanidine hydrochloride (GdnHCl) were then studied by FT-IR, Fourier self-deconvolution spectra, and circular dichroism (CD). At pH 2.0, the contents of both α-helix and anti-parallel β-sheet decrease and that of random coil increases, while β-turn is unchanged compared with the neutral condition (pH 7.0). At pH 11.0, the results indicate that the contents of α-helix, anti-parallel β-sheet, and β-turn decrease, while random coil increases. According to the CD measurements, the relative average fractions of α-helix, anti-parallel β-sheet, β-turn/parallel β-sheet, aromatic residues and disulfide bond, and random coil/γ-turn are 41.7%, 16.7%, 23.5%, 11.3%, and 6.8% at pH 7.0, respectively; 7.2%, 7.7%, 15.2%, 10.7%, and 59.2% at pH 2.0; and 20.6%, 9.5%, 15.2%, 10.5%, and 44.2% at pH 11.0. Both α-helix and random coil decrease with increasing temperature, while anti-parallel β-sheet increases. After incubation in 6 mol/L guanidine hydrochloride for 30 min, the α-helix fraction almost disappears (only 1.1% left), while random coil/γ-turn increases to 81.8%, which coincides well with the results of the enzymatic activity experiments.

  11. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing--recursively interleaving--the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.

  12. Designing and conducting a randomized trial for pandemic critical illness: the 2009 H1N1 influenza pandemic.

    PubMed

    Annane, Djillali; Antona, Marion; Lehmann, Blandine; Kedzia, Cecile; Chevret, Sylvie

    2012-01-01

    To analyze the hurdles in implementing a randomized trial of corticosteroids for severe 2009 H1N1 influenza infections. This was an investigator-led, multicenter, randomized, placebo-controlled, double-blind trial of corticosteroids in ICU patients with 2009 H1N1 influenza pneumonia requiring mechanical ventilation. The feasibility of and hurdles in designing and initiating a phase III trial in a short-lived pandemic crisis were analyzed. The regulatory agency and ethics committee approved the study's scientific, financial, and ethical aspects within 4 weeks. Hydrocortisone and placebo were prepared centrally and shipped to participating hospitals within 6 weeks. The inclusion period started on November 9, 2009. From August 1, 2009 to March 8, 2010, only 205/224 ICU patients with H1N1 infections required mechanical ventilation. The peak of the wave was missed by 2-3 weeks and only 26 patients were randomized. The two main reasons for non-inclusion were patients' admission before the beginning of the trial and ICU personnel overwhelmed by clinical duties. Parallel rather than sequential regulatory and ethics approval, and preparation and masking of study drugs by local pharmacists would have allowed the study to start 1 month earlier and before the peak of the "flu" wave. A dedicated research team in each participating center would have increased the ratio of screened to randomized patients. This report highlights the main hurdles in implementing a randomized trial for a pandemic critical illness and proposes solutions for future trials.

  13. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by semi-randomly varying routing policies for different packets

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-23

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Nodes vary a choice of routing policy for routing data in the network in a semi-random manner, so that similarly situated packets are not always routed along the same path. Semi-random variation of the routing policy tends to avoid certain local hot spots of network activity, which might otherwise arise using more consistent routing determinations. Preferably, the originating node chooses a routing policy for a packet, and all intermediate nodes in the path route the packet according to that policy. Policies may be rotated on a round-robin basis, selected by generating a random number, or otherwise varied.
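The selection schemes the patent abstract mentions (round-robin rotation or random choice at the originating node, with intermediate nodes honoring the stamped choice) can be illustrated with a short sketch. The policy names and the `OriginNode` class here are invented for illustration and are not from the patent:

```python
# Invented illustration of per-packet routing-policy variation: the
# originating node stamps each packet with a policy chosen round-robin
# or at random, and intermediate nodes route by the stamp rather than
# re-deciding, so similarly situated packets do not all share one path.

import itertools
import random

POLICIES = ["x_first", "y_first", "z_first", "adaptive"]  # hypothetical names

class OriginNode:
    def __init__(self, mode="round_robin", seed=0):
        self.mode = mode
        self._cycle = itertools.cycle(POLICIES)   # round-robin rotation
        self._rng = random.Random(seed)           # random selection

    def choose_policy(self):
        if self.mode == "round_robin":
            return next(self._cycle)
        return self._rng.choice(POLICIES)

    def send(self, payload):
        # The chosen policy travels with the packet; every intermediate
        # node honors this stamp for the packet's whole path.
        return {"payload": payload, "policy": self.choose_policy()}
```

Because the stamp is fixed at the origin, per-hop behavior stays consistent along a path while consecutive packets still spread across different routes, which is the hot-spot-avoidance effect described above.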

  14. Prospective Randomized Phase II Parallel Study of Vinorelbine Maintenance Therapy versus Best Supportive Care in Advanced Non-Small Cell Lung Cancer.

    PubMed

    Khosravi, Adnan; Esfahani-Monfared, Zahra; Seifi, Sharareh; Khodadad, Kian

    2017-01-01

    Maintenance strategy has been used to improve survival in non-small cell lung cancer (NSCLC). We investigated whether switch maintenance therapy with vinorelbine improved progression-free survival (PFS) after first-line chemotherapy with gemcitabine plus carboplatin. In this single-blind, parallel, phase 2, randomized trial, patients with NSCLC pathology, age >18 years, Eastern Cooperative Oncology Group (ECOG) performance status (PS) score of 0-2, and advanced stage (IIIB and IV) were treated with up to 6 cycles of gemcitabine 1250 mg/m2 (days 1 and 8) plus carboplatin AUC 5 (day 1) every 3 weeks. Patients who did not show progression after first-line chemotherapy were randomly assigned to receive switch maintenance with vinorelbine (25 mg/m2, days 1 and 15) or best supportive care until disease progression. A total of 100 patients were registered, of whom 34 had a non-progressive response to first-line chemotherapy and randomly received maintenance vinorelbine (n=19) or best supportive care (n=15). The hazard ratio of PFS in the vinorelbine group relative to the best supportive care group was 1.097 (95% confidence interval = 0.479-2.510; P-value = 0.827). There was no significant difference between the overall survival of the two groups (P=0.068). Switch maintenance strategies are beneficial, but defining the right candidates for treatment is a problem. Moreover, trial designs do not always reflect real-world considerations. Switch maintenance therapy with vinorelbine, though it had tolerable toxicity, did not improve PFS in patients with NSCLC. Therefore, other agents should be considered in this setting.

  15. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  16. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…

  17. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
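For contrast with the Tracy-Widom procedure discussed above, here is a minimal NumPy sketch of classical parallel analysis for principal components. The function name, the 95th-percentile cutoff, and the number of simulated data sets are illustrative choices, not prescriptions from the paper:

```python
# Minimal Horn-style parallel analysis: retain components whose sample
# eigenvalue exceeds the chosen percentile of eigenvalues obtained from
# random normal data of the same shape. Illustrative sketch only.

import numpy as np

def parallel_analysis(x, n_sims=200, percentile=95.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = x.shape
    # Eigenvalues of the sample correlation matrix, descending.
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        random_eigs[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresholds = np.percentile(random_eigs, percentile, axis=0)
    # Retain components sequentially while they beat the random threshold.
    k = 0
    while k < p and sample_eigs[k] > thresholds[k]:
        k += 1
    return k
```

The unpredictability for higher-order components that the paper reports arises in the `while` loop above: beyond the first eigenvalue, the marginal random-data percentiles are no longer the right reference distribution, which is the motivation for the joint-distribution and Tracy-Widom arguments.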

  18. Pre-operative use of dexamethasone does not reduce incidence or intensity of bleaching-induced tooth sensitivity. A triple-blind, parallel-design, randomized clinical trial.

    PubMed

    da Costa Poubel, Luiz Augusto; de Gouvea, Cresus Vinicius Deppes; Calazans, Fernanda Signorelli; Dip, Etyene Castro; Alves, Wesley Veltri; Marins, Stella Soares; Barcelos, Roberta; Barceleiro, Marcos Oliveira

    2018-04-25

    This study evaluated the effect of the administration of pre-operative dexamethasone on tooth sensitivity stemming from in-office bleaching. A triple-blind, parallel-design, randomized clinical trial was conducted on 70 volunteers who received dexamethasone or placebo capsules. The drugs were administered in a protocol of three daily 8-mg doses, starting 48 h before the in-office bleaching treatment. Two bleaching sessions with 37.5% hydrogen peroxide gel were performed with a 1-week interval. Tooth sensitivity (TS) was recorded on visual analog scales (VAS) and numeric rating scales (NRS) at different periods up to 48 h after bleaching. Color evaluations were also performed. The absolute risk of TS and its intensity were evaluated using Fisher's exact test. Comparisons of the TS intensity (NRS and VAS data) were performed using the Mann-Whitney U test and a two-way repeated measures ANOVA with Tukey's test, respectively. In both groups, a high risk of TS (Dexa 80% vs. Placebo 94%) was detected. No significant difference was observed in terms of TS intensity. A whitening of approximately 3 shade guide units of the VITA Classical scale was detected in both groups, which were statistically similar. It was concluded that the pre-operative administration of dexamethasone, in the proposed protocol, does not reduce the incidence or intensity of bleaching-induced tooth sensitivity. The use of dexamethasone before in-office bleaching treatment does not reduce the incidence or intensity of tooth sensitivity. NCT02956070.

  19. Catch and Patch: A Pipette-Based Approach for Automating Patch Clamp That Enables Cell Selection and Fast Compound Application.

    PubMed

    Danker, Timm; Braun, Franziska; Silbernagl, Nikole; Guenther, Elke

    2016-03-01

    Manual patch clamp, the gold standard of electrophysiology, represents a powerful and versatile toolbox to stimulate, modulate, and record ion channel activity from membrane fragments and whole cells. The electrophysiological readout can be combined with fluorescent or optogenetic methods and allows for ultrafast solution exchanges using specialized microfluidic tools. A hallmark of manual patch clamp is the intentional selection of individual cells for recording, often an essential prerequisite to generate meaningful data. So far, available automation solutions rely on random cell usage in the closed environment of a chip and thus sacrifice much of this versatility by design. To parallelize and automate the traditional patch clamp technique while perpetuating the full versatility of the method, we developed an approach to automation, which is based on active cell handling and targeted electrode placement rather than on random processes. This is achieved through an automated pipette positioning system, which guides the tips of recording pipettes with micrometer precision to a microfluidic cell handling device. Using a patch pipette array mounted on a conventional micromanipulator, our automated patch clamp process mimics the original manual patch clamp as closely as possible, yet achieving a configuration where recordings are obtained from many patch electrodes in parallel. In addition, our implementation is extensible by design to allow the easy integration of specialized equipment such as ultrafast compound application tools. The resulting system offers fully automated patch clamp on purposely selected cells and combines high-quality gigaseal recordings with solution switching in the millisecond timescale.

  20. Promoting a Positive Middle School Transition: A Randomized-Controlled Treatment Study Examining Self-Concept and Self-Esteem.

    PubMed

    Coelho, Vitor Alexandre; Marchante, Marta; Jimerson, Shane R

    2017-03-01

    The middle school transition is a salient developmental experience impacting adolescents around the world. This study employed a randomized-controlled treatment design, with randomization at the school level, to investigate the impact of a school adjustment program for the middle school transition and potential gender differences. Participants included 1147 students (M age = 9.62, SD = 0.30; 45.7% girls), who were assessed at four time points during the transition regarding five dimensions of self-concept (academic, social, emotional, physical, and family) and self-esteem. Parallel growth curves were employed to analyze the evolution of self-concept. Following the transition to middle school, students reported lower levels of self-concept (academic, emotional, and physical) and self-esteem, while participation in the intervention led to increases in self-esteem and gains in social self-concept. No gender differences were found. These results provide preliminary evidence supporting such interventions in early middle school transitions.

  1. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  2. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen-plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement-matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter-mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with conventional CS-based cryptosystems that generate all the random entries of the measurement matrix anew, our scheme is more efficient while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.

  3. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  4. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  5. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.

  6. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

    The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the MATLAB function rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, obtained by using different random number seeds, are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/- 0.07 for the Poisson equation approach, 0.028 +/- 0.003 for the Circular Virtual Boundary method, and 0.027 +/- 0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.

  7. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
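The synchronous-versus-asynchronous distinction can be sketched as follows. This is an assumed, simplified illustration rather than the authors' implementation: the inertia and acceleration constants, the worker count, and the sphere objective are placeholders. The key point is that each particle is updated and resubmitted as soon as its own evaluation returns, so no end-of-iteration barrier forces fast workers to idle:

```python
# Sketch of asynchronous coarse-grained PSO: evaluations run in a thread
# pool; the master updates whichever particle finishes first using the
# current global best, with no per-iteration synchronization barrier.

import concurrent.futures as cf
import random

def sphere(x):
    """Placeholder objective; in the paper this is an expensive analysis."""
    return sum(v * v for v in x)

def async_pso(f, dim=3, n_particles=8, max_evals=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [float("inf")] * n_particles
    gbest, gbest_val = pos[0][:], float("inf")

    with cf.ThreadPoolExecutor(max_workers=4) as ex:
        # Seed the pool with one evaluation per particle.
        pending = {ex.submit(f, p[:]): i for i, p in enumerate(pos)}
        done = 0
        while pending and done < max_evals:
            # Take whichever evaluation finishes first -- no iteration barrier.
            fut = next(cf.as_completed(pending))
            i = pending.pop(fut)
            val = fut.result()
            done += 1
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
            if val < gbest_val:
                gbest_val, gbest = val, pos[i][:]
            # Move only this particle, using the current global best,
            # then immediately resubmit it for evaluation.
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pending[ex.submit(f, pos[i][:])] = i
        # Leaving the "with" block waits out any still-pending evaluations.
    return gbest, gbest_val
```

All swarm state is touched only by the master thread; workers only evaluate `f`. This mirrors the master-worker structure typically used for coarse-grained parallelization and is exactly what keeps a heterogeneous cluster busy when analysis times vary between design points.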

  8. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    PubMed

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying the diversification patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which hindered phylogenetic analysis that used specific gene sequences. The reliable rooting of the lineage led to the discovery, with high bootstrap support, of a novel, northern lineage that expanded during an interglacial period. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  9. A sweep algorithm for massively parallel simulation of circuit-switched networks

    NASA Technical Reports Server (NTRS)

    Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.

    1992-01-01

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384-processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.

  10. Reporting of participant flow diagrams in published reports of randomized trials

    PubMed Central

    2011-01-01

    Background Reporting of the flow of participants through each stage of a randomized trial is essential to assess the generalisability and validity of its results. We assessed the type and completeness of information reported in CONSORT (Consolidated Standards of Reporting Trials) flow diagrams published in current reports of randomized trials. Methods A cross-sectional review of all primary reports of randomized trials which included a CONSORT flow diagram indexed in PubMed core clinical journals (2009). We assessed the proportion of parallel-group trial publications reporting specific items recommended by CONSORT for inclusion in a flow diagram. Results Of 469 primary reports of randomized trials, 263 (56%) included a CONSORT flow diagram, of which 89% (237/263) were published in a CONSORT-endorsing journal. Reports published in CONSORT-endorsing journals were more likely to include a flow diagram (62%; 237/380 versus 29%; 26/89). Ninety percent (236/263) of reports which included a flow diagram had a parallel-group design, of which 49% (116/236) evaluated drug interventions, 58% (137/236) were multicentre, and 79% (187/236) compared two study groups, with a median sample size of 213 participants. Eighty-one percent (191/236) reported the overall number of participants assessed for eligibility, 71% (168/236) the number excluded prior to randomization and 98% (231/236) the overall number randomized. Reasons for exclusion prior to randomization were more poorly reported. Ninety-four percent (223/236) reported the number of participants allocated to each arm of the trial. However, only 40% (95/236) reported the number who actually received the allocated intervention, 67% (158/236) the number lost to follow up in each arm of the trial, 61% (145/236) whether participants discontinued the intervention during the trial and 54% (128/236) the number included in the main analysis. 
Conclusions Over half of published reports of randomized trials included a diagram showing the flow of participants through the trial. However, information was often missing from published flow diagrams, even in articles published in CONSORT endorsing journals. If important information is not reported it can be difficult and sometimes impossible to know if the conclusions of that trial are justified by the data presented. PMID:22141446

  11. Effect of alignment of easy axes on dynamic magnetization of immobilized magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yoshida, Takashi; Matsugi, Yuki; Tsujimura, Naotaka; Sasayama, Teruyoshi; Enpuku, Keiji; Viereck, Thilo; Schilling, Meinhard; Ludwig, Frank

    2017-04-01

    In some biomedical applications of magnetic nanoparticles (MNPs), the particles are physically immobilized. In this study, we explore the effect of the alignment of the magnetic easy axes on the dynamic magnetization of immobilized MNPs under an AC excitation field. We prepared three immobilized MNP samples: (1) a sample in which easy axes are randomly oriented, (2) a parallel-aligned sample in which easy axes are parallel to the AC field, and (3) an orthogonally aligned sample in which easy axes are perpendicular to the AC field. First, we show that the parallel-aligned sample has the largest hysteresis in the magnetization curve and the largest harmonic magnetization spectra, followed by the randomly oriented and orthogonally aligned samples. For example, a 1.6-fold increase was observed in the area of the hysteresis loop of the parallel-aligned sample compared with that of the randomly oriented sample. To quantitatively discuss the experimental results, we perform a numerical simulation based on a Fokker-Planck equation, in which probability distributions for the directions of the easy axes are taken into account in simulating the prepared MNP samples. We obtained quantitative agreement between experiment and simulation. These results indicate that the dynamic magnetization of immobilized MNPs is significantly affected by the alignment of the easy axes.

  12. A repeated measures model for analysis of continuous outcomes in sequential parallel comparison design studies.

    PubMed

    Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio

    2013-07-20

    Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rate in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for continuous data from SPCD trials include methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
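    The basic SPCD idea that the abstract builds on — estimating a pooled treatment effect from phase 1 (all randomized subjects) and phase 2 (placebo non-responders re-randomized) — can be illustrated with a simple weighted combination of phase effects. This is the elementary pooling idea, not the authors' repeated measures model with a fitted contrast, and every number below is a hypothetical simulation parameter.

```python
import numpy as np

rng = np.random.default_rng(7)
w = 0.6  # weight on the phase-1 effect; a design choice, not from the paper

# Simulated continuous outcomes (symptom change; negative = improvement).
# Phase 1: drug vs placebo in all randomized subjects.
drug1 = rng.normal(-4.0, 3.0, 100)
plac1 = rng.normal(-2.0, 3.0, 200)

# Phase 2: placebo non-responders re-randomized to drug or placebo.
nonresp = plac1 > -2.0                 # crude non-responder rule
n2 = nonresp.sum()
drug2 = rng.normal(-3.5, 3.0, n2 // 2)
plac2 = rng.normal(-1.0, 3.0, n2 - n2 // 2)

d1 = drug1.mean() - plac1.mean()       # phase-1 treatment effect
d2 = drug2.mean() - plac2.mean()       # phase-2 treatment effect
effect = w * d1 + (1 - w) * d2         # pooled SPCD treatment effect
print(effect)
```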

  13. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
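    The kind of preconditioned sparse iterative solver the abstract relies on can be illustrated with Jacobi-preconditioned conjugate gradients applied to a 1-D diffusion equation with a spatially varying coefficient. The log-normal coefficient is a stand-in for one realization of the non-Gaussian random field; this is a minimal sketch, not the authors' domain decomposition code.

```python
import numpy as np

def precond_cg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradients for SPD systems."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Finite-difference discretization of -(kappa u')' = 1 with Dirichlet BCs,
# where kappa is one sampled (log-normal) diffusion coefficient field.
n = 50
rng = np.random.default_rng(3)
kappa = np.exp(rng.normal(0.0, 0.3, n + 1))   # positive by construction
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = kappa[i] + kappa[i + 1]
    if i > 0:
        A[i, i - 1] = -kappa[i]
    if i < n - 1:
        A[i, i + 1] = -kappa[i + 1]
b = np.ones(n)
x = precond_cg(A, b, M_inv=1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```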

  14. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  15. Treatment of early-onset schizophrenia spectrum disorders (TEOSS): rationale, design, and methods.

    PubMed

    McClellan, Jon; Sikich, Linmarie; Findling, Robert L; Frazier, Jean A; Vitiello, Benedetto; Hlastala, Stefanie A; Williams, Emily; Ambler, Denisse; Hunt-Harrison, Tyehimba; Maloney, Ann E; Ritz, Louise; Anderson, Robert; Hamer, Robert M; Lieberman, Jeffrey A

    2007-08-01

    The Treatment of Early Onset Schizophrenia Spectrum Disorders Study is a publicly funded clinical trial designed to compare the therapeutic benefits, safety, and tolerability of risperidone, olanzapine, and molindone in youths with early-onset schizophrenia spectrum disorders. The rationale, design, and methods of the Treatment of Early Onset Schizophrenia Spectrum Disorders Study are described. Using a randomized, double-blind, parallel-group design at four sites, youths with early-onset schizophrenia spectrum disorders (EOSS; ages 8-19 years) were assigned to an 8-week acute trial of risperidone (0.5-6.0 mg/day), olanzapine (2.5-20 mg/day), or molindone (10-140 mg/day). Responders continued double-blind treatment for 44 weeks. The primary outcome measure was responder status at 8 weeks, defined by a 20% reduction in baseline Positive and Negative Symptom Scale scores plus ratings of significant improvement on the Clinical Global Impressions. Secondary outcome measures included assessments of psychopathology, functional impairment, quality of life, and medication safety. An intent-to-treat analytic plan was used. From February 2002 to May 2006, 476 youths were screened, 173 were further evaluated, and 119 were randomized. Several significant study modifications were required to address safety, the use of adjunctive medications, and the termination of the olanzapine treatment arm due to weight gain. The Treatment of Early Onset Schizophrenia Spectrum Disorders Study will inform clinical practice regarding the use of antipsychotic medications for youths with early-onset schizophrenia spectrum disorders. Important safety concerns emerged during the study, including higher than anticipated rates of suicidality and problems tapering thymoleptic agents before randomization.

  16. Deducing trapdoor primitives in public key encryption schemes

    NASA Astrophysics Data System (ADS)

    Pandey, Chandra

    2005-03-01

    Semantic security of public key encryption schemes is often interchangeable with the art of building trapdoors. In the frame of reference of the Random Oracle methodology, "key privacy" and "anonymity" have often been discussed. However, to a certain degree the security of most public key encryption schemes is required to be analyzed with formal proofs using one-way functions. This paper evaluates the design of El Gamal and RSA based schemes and attempts to parallelize the trapdoor primitives used in the computation of the cipher text, thereby magnifying the decryption error δp in the above schemes.
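    The trapdoor primitive at the heart of El Gamal-type schemes — easy exponentiation one way, decryption only with the private exponent — can be sketched with textbook ElGamal over a toy prime group. All parameters are illustrative and far too small for real use; practical schemes use large groups and padding/KEM constructions.

```python
import random

# Textbook ElGamal over a tiny prime group (illustration only).
p = 467          # small prime, hypothetical toy parameter
g = 2            # group element used as the base

random.seed(0)
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key; g^x is the one-way/trapdoor step

def encrypt(m, h):
    k = random.randrange(2, p - 1)        # fresh ephemeral randomness
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                     # recompute shared secret via trapdoor
    return (c2 * pow(s, -1, p)) % p       # modular inverse (Python 3.8+)

c1, c2 = encrypt(42, h)
print(decrypt(c1, c2, x))  # → 42
```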

  17. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  18. Evaluation of piezocision and laser-assisted flapless corticotomy in the acceleration of canine retraction: a randomized controlled trial.

    PubMed

    Alfawal, Alaa M H; Hajeer, Mohammad Y; Ajaj, Mowaffak A; Hamadah, Omar; Brad, Bassel

    2018-02-17

    To evaluate the effectiveness of two minimally invasive surgical procedures in the acceleration of canine retraction: piezocision and laser-assisted flapless corticotomy (LAFC). Trial design: A single-centre randomized controlled trial with a compound design (a two-arm parallel-group design and a split-mouth design for each arm). Participants: 36 Class II division 1 patients (12 males, 24 females; age range: 15 to 27 years) requiring first upper premolar extraction followed by canine retraction. Interventions: piezocision group (PG; n = 18) and laser-assisted flapless corticotomy group (LG; n = 18). A split-mouth design was applied within each group: the flapless surgical intervention was randomly allocated to one side, and the other side served as a control. Outcomes: the rate of canine retraction (primary outcome), anchorage loss and canine rotation, assessed at 1, 2, 3 and 4 months following the onset of canine retraction; the duration of canine retraction was also recorded. Random sequence: computer-generated random numbers. Allocation concealment: sequentially numbered, opaque, sealed envelopes. Blinding: single blinded (outcome assessor). Seventeen patients in each group were included in the statistical analysis. The rate of canine retraction was significantly greater on the experimental side than on the control side in both groups, approximately two-fold in the first month and 1.5-fold in the second month (p < 0.001). The overall duration of canine retraction was also significantly reduced on the experimental side compared with the control side in both groups, by about 25% (p ≤ 0.001). There were no significant differences between the experimental and control sides regarding loss of anchorage and upper canine rotation in either group (p > 0.05), and no significant differences between the two flapless techniques regarding the studied variables at any evaluation time (p > 0.05). 
Piezocision and laser-assisted flapless corticotomy appeared to be effective treatment methods for accelerating canine retraction without any significant untoward effect on anchorage or canine rotation during rapid retraction. ClinicalTrials.gov (Identifier: NCT02606331 ).
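    The sequence generation described above (computer-generated random numbers for group assignment, plus random side allocation within each patient's split-mouth design) might be sketched as follows. The block size and the per-patient side-allocation mechanism are assumptions for illustration; the report states only computer-generated random numbers with sealed-envelope concealment.

```python
import random

random.seed(2018)

def block_randomize(n, arms=("PG", "LG"), block=4):
    """Permuted-block randomization: each block holds equal counts per arm."""
    seq = []
    while len(seq) < n:
        blk = list(arms) * (block // len(arms))
        random.shuffle(blk)
        seq.extend(blk)
    return seq[:n]

groups = block_randomize(36)
# Split-mouth design: one randomly chosen side per patient gets the
# flapless intervention; the other side serves as its own control.
sides = [random.choice(["left", "right"]) for _ in range(36)]
allocation = list(zip(groups, sides))
print(allocation[:3])
```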

  19. Acute cholecystitis in high risk surgical patients: percutaneous cholecystostomy versus laparoscopic cholecystectomy (CHOCOLATE trial): Study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

    Background Laparoscopic cholecystectomy in acute calculous cholecystitis in high risk patients can lead to significant morbidity and mortality. Percutaneous cholecystostomy may be an alternative treatment option but the current literature does not provide the surgical community with evidence based advice. Methods/Design The CHOCOLATE trial is a randomised controlled, parallel-group, superiority multicenter trial. High risk patients, defined as APACHE-II score 7-14, with acute calculous cholecystitis will be randomised to laparoscopic cholecystectomy or percutaneous cholecystostomy. During a two year period 284 patients will be enrolled from 30 high volume teaching hospitals. The primary endpoint is a composite endpoint of major complications within three months following randomization and need for re-intervention and mortality during the follow-up period of one year. Secondary endpoints include all other complications, duration of hospital admission, difficulty of procedures and total costs. Discussion The CHOCOLATE trial is designed to provide the surgical community with an evidence based guideline in the treatment of acute calculous cholecystitis in high risk patients. Trial Registration Netherlands Trial Register (NTR): NTR2666 PMID:22236534

  20. Rationale and design of the SERVE-HF study: treatment of sleep-disordered breathing with predominant central sleep apnoea with adaptive servo-ventilation in patients with chronic heart failure.

    PubMed

    Cowie, Martin R; Woehrle, Holger; Wegscheider, Karl; Angermann, Christiane; d'Ortho, Marie-Pia; Erdmann, Erland; Levy, Patrick; Simonds, Anita; Somers, Virend K; Zannad, Faiez; Teschler, Helmut

    2013-08-01

    Central sleep apnoea/Cheyne-Stokes respiration (CSA/CSR) is a risk factor for increased mortality and morbidity in heart failure (HF). Adaptive servo-ventilation (ASV) is a non-invasive ventilation modality for the treatment of CSA/CSR in patients with HF. SERVE-HF is a multinational, multicentre, randomized, parallel trial designed to assess the effects of addition of ASV (PaceWave, AutoSet CS; ResMed) to optimal medical management compared with medical management alone (control group) in patients with symptomatic chronic HF, LVEF ≤45%, and predominant CSA. The trial is based on an event-driven group sequential design, and the final analysis will be performed when 651 events have been observed or the study is terminated at one of the two interim analyses. The aim is to randomize ∼1200 patients to be followed for a minimum of 2 years. Patients are to stay in the trial up to study termination. The first patient was randomized in February 2008 and the study is expected to end mid 2015. The primary combined endpoint is the time to first event of all-cause death, unplanned hospitalization (or unplanned prolongation of a planned hospitalization) for worsening (chronic) HF, cardiac transplantation, resuscitation of sudden cardiac arrest, or appropriate life-saving shock for ventricular fibrillation or fast ventricular tachycardia in implantable cardioverter defibrillator patients. The SERVE-HF study is a randomized study that will provide important data on the effect of treatment with ASV on morbidity and mortality, as well as the cost-effectiveness of this therapy, in patients with chronic HF and predominantly CSA/CSR. ISRCTN19572887.

  1. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large scale computations, making it a highly time consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on GPU is the same as that on a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
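    The random-forest voting idea that GPURFSCREEN parallelizes can be illustrated with a deliberately tiny CPU sketch: each "tree" is a depth-1 stump trained on a bootstrap sample and a random feature subset of binary fingerprints, and predictions are made by majority vote. The activity rule, sizes, and all parameters are hypothetical, chosen so the toy problem is easily learnable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 64
X = rng.integers(0, 2, (n, d))   # binary ligand fingerprints
y = X[:, 3].copy()               # hypothetical single-bit "activity" rule

def fit_stump(X, y, feats):
    """Pick the feature (possibly inverted) that best matches the labels."""
    best = (0.0, feats[0])
    for f in feats:
        acc = max((X[:, f] == y).mean(), (X[:, f] != y).mean())
        if acc > best[0]:
            best = (acc, f)
    f = best[1]
    flip = (X[:, f] != y).mean() > (X[:, f] == y).mean()
    return f, flip

def fit_forest(X, y, n_trees=40, m=32):
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))            # bootstrap sample
        feats = rng.choice(X.shape[1], m, replace=False)  # random subspace
        stumps.append(fit_stump(X[idx], y[idx], feats))
    return stumps

def predict(stumps, X):
    votes = np.mean([(1 - X[:, f]) if flip else X[:, f]
                     for f, flip in stumps], axis=0)
    return (votes > 0.5).astype(int)

stumps = fit_forest(X, y)
print((predict(stumps, X) == y).mean())   # training accuracy
```

Real forests grow full trees and, as in the paper, map the per-tree training and prediction work onto GPU threads; the bootstrap-plus-random-subspace structure is the part sketched here.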

  2. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  3. A Parallel Trade Study Architecture for Design Optimization of Complex Systems

    NASA Technical Reports Server (NTRS)

    Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.

  4. Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    The mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To address the problem of the system's stiffness changing with the gravity vector, the hybrid parallel mechanism design method was introduced into this research. Combining the high stiffness and precision of a parallel structure with the large range of motion of a serial structure, a conceptual design of a hybrid parallel mask exchange system based on a 3-RPS parallel mechanism was presented. According to the position requirements of the MOBIE, a SolidWorks structure model of the hybrid parallel mask exchange robot was established, and an installation position that does not interfere with the related components and light path in the MOBIE was identified. Simulation results in SolidWorks suggested that the 3-RPS parallel platform had good stiffness in different gravity vector directions. Furthermore, through analysis of the mechanism theory, the inverse kinematics solution of the 3-RPS parallel platform was calculated and the mathematical relationship between the attitude angle of the moving platform and the angle of the ball hinges on the moving platform was established, in order to analyze the attitude adjustment ability of the hybrid parallel mask exchange robot. The proposed conceptual design has some guiding significance for the design of the mask exchange system of the MOBIE on TMT.

  5. Radiation effects in reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Quinn, Heather

    2017-04-01

    Field-programmable gate arrays (FPGAs) are co-processing hardware used in image and signal processing. FPGAs are programmed with custom implementations of an algorithm. These algorithms are highly parallel hardware designs that are faster than software implementations. This flexibility and speed have made FPGAs attractive for many space programs that need in situ, high-speed signal processing for data categorization and data compression. Most commercial FPGAs are affected by the space radiation environment, though. Problems with total ionizing dose (TID) have restricted the use of flash-based FPGAs. Static random access memory (SRAM)-based FPGAs must be mitigated to suppress errors from single-event upsets. This paper provides a review of radiation effects issues in reconfigurable FPGAs and discusses methods for mitigating these problems. With careful design it is possible to use these components effectively and resiliently.
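    One standard mitigation for single-event upsets in SRAM-based FPGAs is triple modular redundancy (TMR): triplicate the logic and vote bitwise on the outputs. The majority voter at its core can be sketched in a few lines (illustrative only; real designs implement the voter in fabric and typically pair TMR with configuration-memory scrubbing).

```python
def tmr_vote(a, b, c):
    """Bitwise majority of three redundant copies: an upset flipping bits in
    one copy is outvoted by the other two."""
    return (a & b) | (a & c) | (b & c)

module_output = 0b1011_0010
copies = [module_output] * 3
copies[1] ^= 0b0000_0100   # single-event upset flips one bit in one copy
print(tmr_vote(*copies) == module_output)  # → True
```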

  6. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  7. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
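    Raw bits harvested from oscillator jitter can be biased, so TRNG designs often apply post-processing. A classic debiasing step (a textbook technique, not necessarily the one used in this design) is von Neumann extraction, sketched here on simulated biased samples:

```python
import random

def von_neumann(bits):
    """Classic debiasing: for each non-overlapping pair, map 01 -> 0 and
    10 -> 1, and discard 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated raw samples from a jittery oscillator, biased towards 1.
random.seed(42)
raw = [1 if random.random() < 0.7 else 0 for _ in range(20000)]
clean = von_neumann(raw)
print(sum(raw) / len(raw), sum(clean) / len(clean))
```

The trade-off is throughput: on average only p(1-p) of input pairs yield an output bit, which is one reason high-rate designs like the one above prefer to maximize entropy at the sampling stage instead.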

  8. Deep learning of the regulatory grammar of yeast 5′ untranslated regions from 500,000 random sequences

    PubMed Central

    Groves, Benjamin; Kuchina, Anna; Rosenberg, Alexander B.; Jojic, Nebojsa; Fields, Stanley; Seelig, Georg

    2017-01-01

    Our ability to predict protein expression from DNA sequence alone remains poor, reflecting our limited understanding of cis-regulatory grammar and hampering the design of engineered genes for synthetic biology applications. Here, we generate a model that predicts the protein expression of the 5′ untranslated region (UTR) of mRNAs in the yeast Saccharomyces cerevisiae. We constructed a library of half a million 50-nucleotide-long random 5′ UTRs and assayed their activity in a massively parallel growth selection experiment. The resulting data allow us to quantify the impact on protein expression of Kozak sequence composition, upstream open reading frames (uORFs), and secondary structure. We trained a convolutional neural network (CNN) on the random library and showed that it performs well at predicting the protein expression of both a held-out set of the random 5′ UTRs as well as native S. cerevisiae 5′ UTRs. The model additionally was used to computationally evolve highly active 5′ UTRs. We confirmed experimentally that the great majority of the evolved sequences led to higher protein expression rates than the starting sequences, demonstrating the predictive power of this model. PMID:29097404
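    The input representation such sequence models share — a one-hot-encoded nucleotide sequence convolved with motif-detecting filters — can be sketched with plain NumPy. The filters here are random and the dimensions illustrative; the paper's trained CNN architecture and weights are not reproduced.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

def conv1d(x, filters):
    """Valid-mode 1-D convolution; filters has shape (n_filters, width, 4)."""
    n_f, w, _ = filters.shape
    L = x.shape[0] - w + 1
    out = np.zeros((L, n_f))
    for i in range(L):
        out[i] = np.tensordot(filters, x[i:i + w], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)   # ReLU activation

rng = np.random.default_rng(0)
utr = "".join(rng.choice(list(BASES), 50))    # a random 50-nt 5' UTR
filters = rng.standard_normal((8, 6, 4))      # 8 random width-6 motif filters
fmap = conv1d(one_hot(utr), filters)
print(fmap.shape)  # → (45, 8)
```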

  9. A randomized, double-blind, placebo controlled, parallel group, efficacy study of alpha BRAIN® administered orally.

    PubMed

    Solomon, Todd M; Leech, Jarrett; deBros, Guy B; Murphy, Cynthia A; Budson, Andrew E; Vassey, Elizabeth A; Solomon, Paul R

    2016-03-01

    Alpha BRAIN® is a nootropic supplement that purports to enhance cognitive functioning in healthy adults. The goal of this study was to investigate the efficacy of this self-described cognitive-enhancing nootropic on cognitive functioning in a group of healthy adults by utilizing a randomized, double-blind, placebo-controlled design. A total of 63 treatment-naïve individuals between 18 and 35 years of age completed the randomized, double-blind, placebo-controlled trial. All participants completed a 2-week placebo run-in before receiving the active product, Alpha BRAIN®, or new placebo, for 6 weeks. Participants undertook a battery of neuropsychological tests at randomization and at study completion. Primary outcome measures included a battery of neuropsychological tests and measures of sleep. Compared with placebo, the Alpha BRAIN® group significantly improved on tasks of delayed verbal recall and executive functioning. Results also indicated a significant time-by-group interaction in delayed verbal recall for the Alpha BRAIN® group. The use of Alpha BRAIN® for 6 weeks significantly improved recent verbal memory when compared with controls in a group of healthy adults. While the outcome of the study is encouraging, this is the first randomized controlled trial of Alpha BRAIN®, and the results merit further study. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Blood Pressure Response to Losartan and Continuous Positive Airway Pressure in Hypertension and Obstructive Sleep Apnea.

    PubMed

    Thunström, Erik; Manhem, Karin; Rosengren, Annika; Peker, Yüksel

    2016-02-01

    Obstructive sleep apnea (OSA) is common in people with hypertension, particularly resistant hypertension. Treatment with an antihypertensive agent alone is often insufficient to control hypertension in patients with OSA. To determine whether continuous positive airway pressure (CPAP) added to treatment with an antihypertensive agent has an impact on blood pressure (BP) levels. During the initial 6-week, two-center, open, prospective, case-control, parallel-design study (2:1; OSA/no-OSA), all patients began treatment with an angiotensin II receptor antagonist, losartan, 50 mg daily. In the second 6-week, sex-stratified, open, randomized, parallel-design study of the OSA group, all subjects continued to receive losartan and were randomly assigned to either nightly CPAP as add-on therapy or no CPAP. Twenty-four-hour BP monitoring included assessment every 15 minutes during daytime hours and every 20 minutes during the night. Ninety-one patients with untreated hypertension underwent a home sleep study (55 were found to have OSA; 36 were not). Losartan significantly reduced systolic, diastolic, and mean arterial BP in both groups (without OSA: 12.6, 7.2, and 9.0 mm Hg; with OSA: 9.8, 5.7, and 6.1 mm Hg). Add-on CPAP treatment produced no significant changes in 24-hour BP values but did reduce nighttime systolic BP by 4.7 mm Hg. All 24-hour BP values were reduced significantly in the 13 patients with OSA who used CPAP at least 4 hours per night. Losartan reduced BP in patients with OSA, but the reductions were smaller than in those without OSA. Add-on CPAP therapy resulted in no significant changes in 24-hour BP measures except in patients using CPAP efficiently. Clinical trial registered with www.clinicaltrials.gov (NCT00701428).

  11. Effectiveness of adjuvant radiotherapy in patients with oropharyngeal and floor of mouth squamous cell carcinoma and concomitant histological verification of singular ipsilateral cervical lymph node metastasis (pN1-state)--a prospective multicenter randomized controlled clinical trial using a comprehensive cohort design.

    PubMed

    Moergel, Maximilian; Jahn-Eimermacher, Antje; Krummenauer, Frank; Reichert, Torsten E; Wagner, Wilfried; Wendt, Thomas G; Werner, Jochen A; Al-Nawas, Bilal

    2009-12-23

    Modern radiotherapy plays an important role in the therapy of advanced head and neck carcinomas. However, no clinical studies have been published addressing the effectiveness of postoperative radiotherapy in patients with a small tumor (pT1, pT2) and concomitant ipsilateral metastasis of a single lymph node (pN1), which would provide a basis for a general treatment recommendation. The present study is a non-blinded, prospective, multi-center randomized controlled trial (RCT). As the primary clinical endpoint, overall survival in patients receiving postoperative radiation therapy vs. patients without adjuvant therapy following curatively intended surgery is compared. The aim of the study is to enroll 560 adult males and females for 1:1 randomization to one of the two treatment arms (irradiation/no irradiation). Since patients with a small tumor (T1/T2) but singular lymph node metastasis are rare, and the number of patients consenting to randomization is not predictable in advance, all patients rejecting randomization will be treated as preferred and enrolled in a prospective observational study (comprehensive cohort design) after giving informed consent. This observational part of the trial will be performed with maximum consistency with the treatment and observation protocol of the RCT. Because the impact of patient preference for a certain treatment option is not calculable, the parallel design of the RCT and observational study may provide a maximum of evidence and efficacy for evaluation of treatment outcome. Secondary clinical endpoints are as follows: incidence of and time to tumor relapse (locoregional relapse, lymph node involvement, and distant metastatic spread), quality of life as reported by the EORTC (QLQ-C30 with H&N 35 module), and time from operation to orofacial rehabilitation. All tumors represent a homogeneous clinical state, and therefore additional investigation of protein expression levels within resection specimens may serve to establish surrogate parameters of patient outcome. The inherent challenges of a rare clinical condition (pN1) and two substantially different therapy arms would limit the practicality of a classical randomized study. The concept of a comprehensive cohort design combines the preference of a randomized study with the option of careful data interpretation within an observational study. ClinicalTrials.gov: NCT00964977.

  12. Text Messages Promoting Mental Imagery Increase Self-Reported Physical Activity in Older Adults: A Randomized Controlled Study.

    PubMed

    Robin, Nicolas; Toussaint, Lucette; Coudevylle, Guillaume R; Ruart, Shelly; Hue, Olivier; Sinnapah, Stephane

    2018-06-22

    This study tested whether text messages prompting adults 50 years of age and older to perform mental imagery would increase aerobic physical activity (APA) duration using a randomized parallel trial design. Participants were assigned to an Imagery 1, Imagery 2, or placebo group. For 4 weeks, each group was exposed to two conditions (morning text message vs. no morning text message). In the morning message condition, the imagery groups received a text message with the instruction to mentally imagine performing an APA, and the placebo group received a placebo message. All participants received an evening text message of "Did you do your cardio today? If yes, what did you do?" for 3 days per week. Participants of the imagery groups reported significantly more weekly minutes of APA in the morning text message condition compared with the no morning message condition. Electronic messages were effective at increasing minutes of APA.

  13. An international randomized study of a home-based self-management program for severe COPD: the COMET.

    PubMed

    Bourbeau, Jean; Casan, Pere; Tognella, Silvia; Haidl, Peter; Texereau, Joëlle B; Kessler, Romain

    2016-01-01

    Most hospitalizations and costs related to COPD are due to exacerbations and insufficient disease management. The COPD patient Management European Trial (COMET) is investigating a home-based multicomponent COPD self-management program designed to reduce exacerbations and hospital admissions. This is a multicenter, parallel, randomized controlled, open-label superiority trial conducted at thirty-three hospitals in four European countries, enrolling a total of 345 patients with Global initiative for chronic Obstructive Lung Disease stage III/IV COPD. The program includes extensive patient coaching by health care professionals to improve self-management (eg, develop skills to better manage their disease), an e-health platform for reporting frequent health status updates, rapid intervention when necessary, and oxygen therapy monitoring. The comparator is usual management per each center's routine practice. Outcomes include the yearly number of hospital days for acute care, number of exacerbations, quality of life, deaths, and costs.

  14. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

    Misalignment of a photoelectric instrument's binocular optical axis parallelism directly degrades the observation effect. A binocular optical axis parallelism digital calibration system is designed. Based on the principle of binocular optical axis calibration for photoelectric instruments, the system scheme is designed and realized. It comprises four modules: a multiband parallel light tube, optical axis translation, an image acquisition system, and a software system. According to the different characteristics of the thermal infrared imager and the low-light-level night viewer, different algorithms are used to localize the center of the cross reticle, and binocular optical axis parallelism calibration is realized for both low-light-level night viewers and thermal infrared imagers.

  15. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
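    The claimed mechanism can be sketched as a toy in-memory simulation (the names and the FIFO victim policy are illustrative assumptions; the patent does not specify an eviction policy):

    ```python
    class ComputeNode:
        """Toy compute node with a small RAM (page id -> page bytes)."""
        def __init__(self, name, ram_frames):
            self.name = name
            self.ram = {}
            self.ram_frames = ram_frames
            self.backing = {}   # pages this node stores on behalf of others

    def swap_out(vm_node, store_node, page_id):
        """Move one page from vm_node's RAM to store_node's backing storage."""
        store_node.backing[page_id] = vm_node.ram.pop(page_id)

    def swap_in(vm_node, store_node, page_id):
        """Bring a page back from remote backing storage into local RAM."""
        vm_node.ram[page_id] = store_node.backing.pop(page_id)

    def touch(vm_node, store_node, page_id, data=None):
        """Access a page, evicting the oldest resident page if RAM is full
        and faulting the requested page in from the remote store if needed."""
        if page_id not in vm_node.ram:
            if len(vm_node.ram) >= vm_node.ram_frames:
                victim = next(iter(vm_node.ram))      # FIFO victim choice
                swap_out(vm_node, store_node, victim)
            if page_id in store_node.backing:
                swap_in(vm_node, store_node, page_id)
            else:
                vm_node.ram[page_id] = data
        return vm_node.ram[page_id]

    node1 = ComputeNode("compute-1", ram_frames=2)
    node2 = ComputeNode("compute-2", ram_frames=2)
    touch(node1, node2, "p0", b"alpha")
    touch(node1, node2, "p1", b"beta")
    touch(node1, node2, "p2", b"gamma")   # evicts p0 to node2's backing store
    value = touch(node1, node2, "p0")     # page fault: p0 swapped back in
    ```

    The point of the claim is that the "disk" backing the first node's virtual machine is simply RAM on a second compute node, reached over the parallel computer's interconnect.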

  16. Design of a randomized trial of diabetes genetic risk testing to motivate behavior change: the Genetic Counseling/lifestyle Change (GC/LC) Study for Diabetes Prevention.

    PubMed

    Grant, Richard W; Meigs, James B; Florez, Jose C; Park, Elyse R; Green, Robert C; Waxler, Jessica L; Delahanty, Linda M; O'Brien, Kelsey E

    2011-10-01

    The efficacy of diabetes genetic risk testing to motivate behavior change for diabetes prevention is currently unknown. This paper presents key issues in the design and implementation of one of the first randomized trials (The Genetic Counseling/Lifestyle Change (GC/LC) Study for Diabetes Prevention) to test whether knowledge of diabetes genetic risk can motivate patients to adopt healthier behaviors. Because individuals may react differently to receiving 'higher' vs 'lower' genetic risk results, we designed a 3-arm parallel group study to separately test the hypotheses that: (1) patients receiving 'higher' diabetes genetic risk results will increase healthy behaviors compared to untested controls, and (2) patients receiving 'lower' diabetes genetic risk results will decrease healthy behaviors compared to untested controls. In this paper we describe several challenges to implementing this study, including: (1) the application of a novel diabetes risk score derived from genetic epidemiology studies to a clinical population, (2) the use of the principle of Mendelian randomization to efficiently exclude 'average' diabetes genetic risk patients from the intervention, and (3) the development of a diabetes genetic risk counseling intervention that maintained the ethical need to motivate behavior change in both 'higher' and 'lower' diabetes genetic risk result recipients. Diabetes genetic risk scores were developed by aggregating the results of 36 diabetes-associated single nucleotide polymorphisms. Relative risk for type 2 diabetes was calculated using Framingham Offspring Study outcomes, grouped by quartiles into 'higher', 'average' (middle two quartiles) and 'lower' genetic risk. From these relative risks, revised absolute risks were estimated using the overall absolute risk for the study group. For study efficiency, we excluded all patients receiving 'average' diabetes risk results from the subsequent intervention. 
This post-randomization allocation strategy was justified because genotype represents a random allocation of parental alleles ('Mendelian randomization'). Finally, because it would be unethical to discourage participants from engaging in diabetes prevention behaviors, we designed our two diabetes genetic risk counseling interventions (for 'higher' and 'lower' result recipients) so that both groups would be motivated despite receiving opposing results. For this initial assessment of the clinical implementation of genetic risk testing, we assessed intermediate outcomes of attendance at a 12-week diabetes prevention course and changes in self-reported motivation. If effective, longer-term studies with larger sample sizes will be needed to assess whether knowledge of diabetes genetic risk can help patients prevent diabetes. In summary, we designed a randomized clinical trial to explore the motivational impact of disclosing both higher-than-average and lower-than-average genetic risk for type 2 diabetes. This design allowed exploration of both increased risk and false reassurance, and has implications for future studies in translational genomics.

  17. The need to balance merits and limitations from different disciplines when considering the stepped wedge cluster randomized trial design.

    PubMed

    de Hoop, Esther; van der Tweel, Ingeborg; van der Graaf, Rieke; Moons, Karel G M; van Delden, Johannes J M; Reitsma, Johannes B; Koffijberg, Hendrik

    2015-10-30

    Various papers have addressed the pros and cons of the stepped wedge cluster randomized trial design (SWD). However, some issues have not been addressed, or only to a limited extent. Our aim was to provide a comprehensive overview of all merits and limitations of the SWD to assist researchers, reviewers, and medical ethics committees when deciding on the appropriateness of the SWD for a particular study. We performed an initial search to identify articles with a methodological focus on the SWD, and categorized and discussed all reported advantages and disadvantages of the SWD. Additional aspects were identified during multidisciplinary meetings in which ethicists, biostatisticians, clinical epidemiologists, and health economists participated. All aspects of the SWD were compared to the parallel group cluster randomized design. We categorized the merits and limitations of the SWD by distinct phases in the design and conduct of such studies, highlighting that their impact may vary depending on the context of the study or that benefits may be offset by drawbacks across study phases. Furthermore, a real-life illustration is provided. New aspects are identified within all disciplines. Examples of newly identified aspects of an SWD are: the possibility to measure a treatment effect in each cluster to examine the (in)consistency in effects across clusters, the detrimental effect of lower-than-expected inclusion rates, deviation from the ordinary informed consent process, and the question of whether studies using the SWD are likely to have sufficient social value. Discussions are provided on, e.g., clinical equipoise, social value, health-economic decision making, the number of study arms, and interim analyses. Deciding on the use of the SWD involves aspects and considerations from different disciplines, not all of which have been discussed before. The pros and cons of this design should be balanced against other feasible design options so as to choose the optimal design for a particular intervention study.

  18. Parallel Ray Tracing Using the Message Passing Interface

    DTIC Science & Technology

    2007-09-01

    Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very… Keywords: National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel processing, prime numbers, ray tracing.

  19. A Comparison of Parallelism in Interface Designs for Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans

    2004-01-01

    In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants completed an abstract task. The five basic designs…

  20. Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.

    PubMed

    Su, Yu-Xuan; Tu, Yu-Kang

    2018-05-22

    Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth, and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for within the statistical analyses. Ignoring these correlations may result in incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special-design trials. In this study, we demonstrate the following three approaches: the data augmentation approach, the adjusting-variance approach, and the reducing-weight approach. These three methods can be readily applied in current statistical tools such as R and Stata. An example of periodontal regeneration was used to demonstrate how these approaches could be undertaken and implemented within statistical software packages, and to compare results from different approaches. The adjusting-variance approach can be implemented within the network package in Stata, while the reducing-weight approach requires computer programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.

  1. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of the convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.
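    The random interleaver at the heart of the parallel concatenation is simply a random permutation applied to the input block before the second constituent encoder. A minimal sketch (illustrative only; the constituent convolutional encoders are omitted):

    ```python
    import random

    def make_interleaver(n, seed=0):
        """A random interleaver is just a random permutation of 0..n-1."""
        rng = random.Random(seed)
        pi = list(range(n))
        rng.shuffle(pi)
        return pi

    def interleave(bits, pi):
        """The second constituent encoder sees the same block in permuted order."""
        return [bits[pi[i]] for i in range(len(pi))]

    def deinterleave(bits, pi):
        """Invert the permutation (used on the decoder side)."""
        out = [0] * len(pi)
        for i, b in enumerate(bits):
            out[pi[i]] = b
        return out

    n = 16
    pi = make_interleaver(n)
    u = [0] * n
    u[2], u[3] = 1, 1          # a weight-2 input with adjacent 1s
    v = interleave(u, pi)      # the 1s typically land far apart after permuting
    ```

    Spreading the two 1s of a weight-2 input apart is what raises the effective free distance that the design criterion above targets.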

  2. Effectiveness of adjuvant radiotherapy in patients with oropharyngeal and floor of mouth squamous cell carcinoma and concomitant histological verification of singular ipsilateral cervical lymph node metastasis (pN1-state) - A prospective multicenter randomized controlled clinical trial using a comprehensive cohort design

    PubMed Central

    2009-01-01

    Background Modern radiotherapy plays an important role in the therapy of advanced head and neck carcinomas. However, no clinical studies have been published addressing the effectiveness of postoperative radiotherapy in patients with a small tumor (pT1, pT2) and concomitant ipsilateral metastasis of a single lymph node (pN1), which would provide a basis for a general treatment recommendation. Methods/Design The present study is a non-blinded, prospective, multi-center randomized controlled trial (RCT). As the primary clinical endpoint, overall survival in patients receiving postoperative radiation therapy vs. patients without adjuvant therapy following curatively intended surgery is compared. The aim of the study is to enroll 560 adult males and females for 1:1 randomization to one of the two treatment arms (irradiation/no irradiation). Since patients with a small tumor (T1/T2) but singular lymph node metastasis are rare, and the number of patients consenting to randomization is not predictable in advance, all patients rejecting randomization will be treated as preferred and enrolled in a prospective observational study (comprehensive cohort design) after giving informed consent. This observational part of the trial will be performed with maximum consistency with the treatment and observation protocol of the RCT. Because the impact of patient preference for a certain treatment option is not calculable, the parallel design of the RCT and observational study may provide a maximum of evidence and efficacy for evaluation of treatment outcome. Secondary clinical endpoints are as follows: incidence of and time to tumor relapse (locoregional relapse, lymph node involvement, and distant metastatic spread), quality of life as reported by the EORTC (QLQ-C30 with H&N 35 module), and time from operation to orofacial rehabilitation. All tumors represent a homogeneous clinical state, and therefore additional investigation of protein expression levels within resection specimens may serve to establish surrogate parameters of patient outcome. Conclusion The inherent challenges of a rare clinical condition (pN1) and two substantially different therapy arms would limit the practicality of a classical randomized study. The concept of a comprehensive cohort design combines the preference of a randomized study with the option of careful data interpretation within an observational study. Trial registration ClinicalTrials.gov: NCT00964977 PMID:20028566

  3. Pre-consultation educational group intervention to improve shared decision-making in postmastectomy breast reconstruction: study protocol for a pilot randomized controlled trial.

    PubMed

    Platt, Jennica; Baxter, Nancy; Jones, Jennifer; Metcalfe, Kelly; Causarano, Natalie; Hofer, Stefan O P; O'Neill, Anne; Cheng, Terry; Starenkyj, Elizabeth; Zhong, Toni

    2013-07-06

    The Pre-Consultation Educational Group INTERVENTION pilot study seeks to assess the feasibility and inform the optimal design for a definitive randomized controlled trial that aims to improve the quality of decision-making in postmastectomy breast reconstruction patients. This is a mixed-methods pilot feasibility randomized controlled trial that will follow a single-center, 1:1 allocation, two-arm parallel group superiority design. The University Health Network, a tertiary care cancer center in Toronto, Canada. Adult women referred to one of three plastic and reconstructive surgeons for delayed breast reconstruction or prophylactic mastectomy with immediate breast reconstruction. We designed a multi-disciplinary educational group workshop that incorporates the key components of shared decision-making, decision-support, and psychosocial support for cancer survivors prior to the initial surgical consult. The intervention consists of didactic lectures by a plastic surgeon and nurse specialist on breast reconstruction choices, pre- and postoperative care; a value-clarification exercise led by a social worker; and discussions with a breast reconstruction patient. Usual care includes access to an informational booklet, website, and patient volunteer if desired. Expected pilot outcomes include feasibility, recruitment, and retention targets. Acceptability of intervention and full trial outcomes will be established through qualitative interviews. Trial outcomes will include decision-quality measures, patient-reported outcomes, and service outcomes, and the treatment effect estimate and variability will be used to inform the sample size calculation for a full trial. Our pilot study seeks to identify the (1) feasibility, acceptability, and design of a definitive RCT and (2) the optimal content and delivery of our proposed educational group intervention. 
Thirty patients have been recruited to date (8 April 2013), of whom 15 have been randomized to one of three decision support workshops. The trial will close as planned in May 2013. NCT01857882.

  4. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
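    The decomposition strategy described here, partitioning the chip region so that subproblems are independent and then solving them concurrently, can be sketched generically (a toy model; real quadrisection placement optimizes cut costs rather than a fixed geometric split):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def route_subproblem(cells):
        """Stand-in for routing one independent subregion: here we just
        report the wire length of a trivial chain through its cells."""
        return len(cells) - 1 if cells else 0

    def quadrisect(cells, width, height):
        """One level of hierarchical decomposition: split a width x height
        placement region into four independent quadrants."""
        quads = [[] for _ in range(4)]
        for (x, y) in cells:
            q = (1 if x >= width // 2 else 0) + (2 if y >= height // 2 else 0)
            quads[q].append((x, y))
        return quads

    cells = [(x, y) for x in range(8) for y in range(8)]
    quads = quadrisect(cells, 8, 8)
    with ThreadPoolExecutor(max_workers=4) as pool:
        partial = list(pool.map(route_subproblem, quads))   # solved in parallel
    total = sum(partial)
    ```

    The speedup comes entirely from the independence of the subproblems; the partitioning step itself is what the thesis's unique techniques are designed to get right.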

  5. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application for characterizing spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: one computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job-queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1,200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (1,200 hours reduced to 60 hours).
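    The reported factor of 20 follows directly from the figures in the abstract; as a quick check of the arithmetic:

    ```python
    def speedup(t_serial_hours, t_parallel_hours):
        """Ratio of serial to parallel wall-clock time."""
        return t_serial_hours / t_parallel_hours

    # Figures from the abstract: eight cores evaluated 100,000 simulations in
    # 12 hours, so all 10 million would take about 1,200 hours on one machine;
    # the HTCondor pool finished in roughly 60 (non-continuous) hours.
    single_machine_hours = 12 * (10_000_000 / 100_000)
    factor = speedup(single_machine_hours, 60)
    ```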

  6. Parallel digital forensics infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  7. Quincke random walkers

    NASA Astrophysics Data System (ADS)

    Pradillo, Gerardo; Heintz, Aneesh; Vlahovska, Petia

    2017-11-01

    The spontaneous rotation of a sphere in an applied uniform DC electric field (the Quincke effect) has been utilized to engineer self-propelled particles: if the sphere is initially resting on a surface, it rolls. Quincke rollers have been widely used as a model system to study collective behavior in ``active'' suspensions. If the applied field is DC, an isolated Quincke roller follows a straight-line trajectory. In this talk, we discuss the design of a Quincke roller that executes random-walk-like behavior. We utilize an AC field: upon reversal of the field direction, a fluctuation in the axis of rotation (which is degenerate in the plane perpendicular to the field and parallel to the surface) introduces randomness in the direction of motion. The MSD of an isolated Quincke walker depends on the frequency, amplitude, and waveform of the electric field. Experiment and theory are compared. We also investigate the collective behavior of Quincke walkers, the transport of inert particles in a bath of Quincke walkers, and the spontaneous motion of a drop containing Quincke active particles. Supported by NSF Grant CBET 1437545.

  8. Randomized controlled trial of video self-modeling following speech restructuring treatment for stuttering.

    PubMed

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-08-01

    In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech restructuring treatment. Post-treatment, participants were randomly assigned to 2 trial arms: standard maintenance and standard maintenance plus VSM. Participants in the latter arm viewed stutter-free videos of themselves each day for 1 month. The addition of VSM did not improve speech outcomes, as measured by percent syllables stuttered, at either 1 or 6 months postrandomization. However, at the latter assessment, self-rating of worst stuttering severity by the VSM group was 10% better than that of the control group, and satisfaction with speech fluency was 20% better. Quality of life was also better for the VSM group, which was mildly to moderately impaired compared with moderate impairment in the control group. VSM intervention after treatment was associated with improvements in self-reported outcomes. The clinical implications of this finding are discussed.

  9. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…
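    The parallel analysis procedure the study evaluates can be sketched as follows. This is the standard Pearson-correlation version of Horn's method on continuous synthetic data; it is a simplification, since the study's concern is how the random comparison matrices should be generated in the polychoric/ordinal case.

```python
import numpy as np

rng = np.random.default_rng(42)

def parallel_analysis(data, n_sims=200, percentile=95, rng=rng):
    """Horn's parallel analysis (Pearson-correlation version): retain
    components whose observed eigenvalue exceeds the chosen percentile
    of eigenvalues from random normal data of the same shape."""
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        rand_eig[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresh = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > thresh)), obs_eig, thresh

# Synthetic example: two independent latent factors, three indicators each,
# so parallel analysis should retain two components.
latent = rng.standard_normal((500, 2))
data = np.hstack([latent[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
                  latent[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
k, obs, thr = parallel_analysis(data)
```

    For dichotomous or ordinal items, the open question the study addresses is whether the random matrices should be continuous normal data (as here) or discretized data scored with polychoric correlations.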

  10. Efficient assessment of efficacy in post-traumatic peripheral neuropathic pain patients: pregabalin in a randomized, placebo-controlled, crossover study

    PubMed Central

    Jenkins, Tim M; Smart, Trevor S; Hackman, Frances; Cooke, Carol; Tan, Keith KC

    2012-01-01

    Background: Detecting the efficacy of novel analgesic agents in neuropathic pain is challenging. There is a critical need for study designs with the desirable characteristics of assay sensitivity, low placebo response, reliable pain recordings, low cost, short duration of exposure to test drug and placebo, and relevant and recruitable population. Methods: We designed a proof-of-concept, double-blind, randomized, placebo-controlled, crossover study in patients with post-traumatic peripheral neuropathic pain (PTNP) to evaluate whether such a study design had the potential to detect efficacious agents. Pregabalin, known to be efficacious in neuropathic pain, was used as the active analgesic. We also assessed physical activity throughout the study. Results: Twenty-five adults (20–70 years of age) with PTNP for ≥3 months entered a screening week and were then randomized to one of the two following treatment sequences: (1) pregabalin followed by placebo or (2) placebo followed by pregabalin. These 2-week treatment periods were separated by a 2-week washout period. Patients on pregabalin treatment received escalating doses to a final dosage of 300 mg/day (days 5–15). In an attempt to minimize placebo response, patients received placebo treatment during the screening week and the 2-week washout period. Average daily pain scores (primary endpoint) were significantly reduced for pregabalin versus placebo, with a mean treatment difference of −0.81 (95% confidence interval: −1.45 to −0.17; P = 0.015). Conclusion: The efficacy of pregabalin was similar to that identified in a large, parallel group trial in PTNP. Therefore, this efficient crossover study design has potential utility for future proof-of-concept studies in neuropathic pain. PMID:22888270
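    The efficiency argument for the crossover design, in which each patient acts as their own control so that stable between-patient differences cancel, can be illustrated with simulated pain scores. All numbers below are invented for illustration and are not the trial's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 25                                   # patients, matching the study's sample size

# Hypothetical scores: a stable per-patient pain level, period noise,
# and an assumed true drug effect of -0.8 points.
patient_level = rng.normal(6.0, 1.5, n)  # large between-patient spread
placebo = patient_level + rng.normal(0.0, 0.7, n)
drug = patient_level - 0.8 + rng.normal(0.0, 0.7, n)

# Crossover (paired) analysis: patient_level cancels in the difference,
# leaving only the period noise in the standard error.
diff = drug - placebo
t_paired = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# A parallel-group analysis of the same effect must carry the full
# between-patient variance, so its t statistic is typically much smaller
# for the same number of patients.
pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
t_unpaired = (drug.mean() - placebo.mean()) / (pooled_sd * np.sqrt(2.0 / n))
```

    This variance cancellation is why a 25-patient crossover study can reproduce an effect estimate comparable to a much larger parallel-group trial, as the abstract reports.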

  11. Capacity-building and clinical competence in infectious disease in Uganda: a mixed-design study with pre/post and cluster-randomized trial components.

    PubMed

    Weaver, Marcia R; Crozier, Ian; Eleku, Simon; Makanga, Gyaviira; Mpanga Sebuyira, Lydia; Nyakake, Janepher; Thompson, MaryLou; Willis, Kelly

    2012-01-01

    Best practices for training mid-level practitioners (MLPs) to improve global health services are not well characterized. Two hypotheses were: 1) Integrated Management of Infectious Disease (IMID) training would improve clinical competence, as tested with a single-arm, pre-post design, and 2) on-site support (OSS) would yield additional improvements, as tested with a cluster-randomized trial. Thirty-six Ugandan health facilities (randomized 1:1 to parallel OSS and control arms) enrolled two MLPs each. All MLPs participated in IMID (3-week core course, two 1-week boost sessions, distance learning). After the 3-week course, OSS-arm trainees participated in monthly OSS. Twelve written case scenarios tested clinical competencies in HIV/AIDS, tuberculosis, malaria, and other infectious diseases. Each participant completed different randomly assigned blocks of four scenarios before IMID (t0), after the 3-week course (t1), and after the second boost course (t2, 24 weeks after t1). Scoring guides were harmonized with IMID content and Ugandan national policy. Score analyses used a linear mixed-effects model. The primary outcome measure was longitudinal change in scenario scores. Scores were available for 856 scenarios. Mean correct scores at t0, t1, and t2 were 39.3%, 49.1%, and 49.6%, respectively. Mean score increases (95% CI, p-value) for t0-t1 (pre-post period) and t1-t2 (parallel-arm period) were 12.1 ((9.6, 14.6), p<0.001) and -0.6 ((-3.1, +1.9), p = 0.647) percentage points for the OSS arm, and 7.5 ((5.0, 10.0), p<0.001) and 1.6 ((-1.0, +4.1), p = 0.225) for the control arm. The estimated mean difference in t1-to-t2 score change, comparing the OSS arm with the control arm, was -2.2 ((-5.8, +1.4), p = 0.237). From t0 to t2, mean scores increased for all 12 scenarios. Clinical competence increased significantly after the 3-week core course, and the improvement persisted for 24 weeks. No additional impact of OSS was observed. Data on clinical practice, facility-level performance, and health outcomes will complete the assessment of the overall impact of IMID and OSS. ClinicalTrials.gov NCT01190540.

  12. Integrated Task And Data Parallel Programming: Language Design

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
    Additional 1995 Activities: During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++, edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  13. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  14. Effectiveness of Virtual Reality Exercises in STroke Rehabilitation (EVREST): rationale, design, and protocol of a pilot randomized clinical trial assessing the Wii gaming system.

    PubMed

    Saposnik, G; Mamdani, M; Bayley, M; Thorpe, K E; Hall, J; Cohen, L G; Teasell, R

    2010-02-01

    Evidence suggests that increasing the intensity of rehabilitation results in better motor recovery. Limited evidence is available on the effectiveness of an interactive virtual reality gaming system for stroke rehabilitation. EVREST was designed to evaluate the feasibility, safety, and efficacy of using the Nintendo Wii gaming virtual reality (VRWii) technology to improve arm recovery in stroke patients. This pilot randomized study compares VRWii versus recreational therapy (RT) in patients receiving standard rehabilitation within six months of stroke with a motor deficit of ≥3 on the Chedoke-McMaster Scale (arm). In this study we expect to randomize 20 patients. All participants (age 18-85) will receive customary rehabilitative treatment consisting of a standardized protocol (eight sessions, 60 min each, over a two-week period). The primary feasibility outcome is the total time receiving the intervention. The primary safety outcome is the proportion of patients experiencing intervention-related adverse events during the study period. Efficacy, a secondary outcome measure, will be measured by the Wolf Motor Function Test, Box and Block Test, and Stroke Impact Scale at the four-week follow-up visit. From November 2008 to September 2009, 21 patients were randomized to VRWii or RT. Mean age was 61 (range 41-83) years; mean time from stroke onset was 25 (range 10-56) days. EVREST is the first randomized parallel controlled trial assessing the feasibility, safety, and efficacy of virtual reality using Wii gaming technology in stroke rehabilitation. The results of this study will serve as the basis for a larger multicentre trial. ClinicalTrials.gov registration# NTC692523.

  15. Study protocol: a randomized controlled trial investigating the effects of a psychosexual training program for adolescents with autism spectrum disorder.

    PubMed

    Visser, Kirsten; Greaves-Lord, Kirstin; Tick, Nouchka T; Verhulst, Frank C; Maras, Athanasios; van der Vegt, Esther J M

    2015-08-28

    Previous research shows that adolescents with autism spectrum disorder (ASD) run several risks in their psychosexual development and that these adolescents can have limited access to reliable information on puberty and sexuality, emphasizing the need for specific guidance of adolescents with ASD in their psychosexual development. Few studies have investigated the effects of psychosexual training programs for adolescents with ASD and to date no randomized controlled trials are available to study the effects of psychosexual interventions for this target group. The randomized controlled trial (RCT) described in this study protocol aims to investigate the effects of the Tackling Teenage Training (TTT) program on the psychosexual development of adolescents with ASD. This parallel clinical trial, conducted in the South-West of the Netherlands, has a simple equal randomization design with an intervention and a waiting-list control condition. Two hundred adolescents and their parents participate in this study. We assess the participants in both conditions using self-report as well as parent-report questionnaires at three time points during 1 year: at baseline (T1), post-treatment (T2), and for follow-up (T3). To our knowledge, the current study is the first that uses a randomized controlled design to study the effects of a psychosexual training program for adolescents with ASD. It has a number of methodological strengths, namely a large sample size, a wide range of functionally relevant outcome measures, the use of multiple informants, and a standardized research and intervention protocol. Also some limitations of the described study are identified, for instance not making a comparison between two treatment conditions, and no use of blinded observational measures to investigate the ecological validity of the research results. Dutch Trial Register NTR2860. Registered on 20 April 2011.

  16. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review.

    PubMed

    Gohari, Faeze; Baradaran, Hamid Reza; Tabatabaee, Morteza; Anijidani, Shabnam; Mohammadpour Touserkani, Fatemeh; Atlasi, Rasha; Razmgir, Maryam

    2015-01-01

    To determine the quality of randomized controlled clinical trial (RCT) reports in diabetes research in Iran. Systematized review. We included RCTs conducted on diabetes mellitus in Iran. Animal studies, educational interventions, and non-randomized trials were excluded, as were duplicate publications reporting the same participants and intervention. Two independent reviewers identified all eligible articles using a specifically designed data extraction form. We searched international databases (Scopus, ProQuest, EBSCO, Science Direct, Web of Science, Cochrane Library, PubMed) and national Persian-language databases (Magiran, Scientific Information Database (SID), and IranMedex) from January 1995 to January 2013. Two investigators assessed the quality of reporting against the CONSORT 2010 (Consolidated Standards of Reporting Trials) checklist statement; discrepancies were resolved by consulting a third reviewer. One hundred and eighty-five (185) studies were included and appraised. More than half (55.7%) were published in Iranian journals. Most (89.7%) were parallel RCTs, and most (77.8%) were performed on patients with type 2 diabetes. Overall, less than half of the CONSORT items (43.2%) were reported. The reporting of randomization and blinding was poor: only 15.1% of studies mentioned the method of random sequence generation and the strategy of allocation concealment, and only 34.8% of trials reported how blinding was applied. The findings of this study show that the quality of RCTs conducted in Iran in diabetes research seems suboptimal and the reporting incomplete, although an increasing trend of improvement can be seen over time. Therefore, it is suggested that Iranian researchers pay much more attention to design and methodological quality in conducting and reporting diabetes RCTs.

  17. Primary prevention of stroke and cardiovascular disease in the community (PREVENTS): Methodology of a health wellness coaching intervention to reduce stroke and cardiovascular disease risk, a randomized clinical trial.

    PubMed

    Mahon, Susan; Krishnamurthi, Rita; Vandal, Alain; Witt, Emma; Barker-Collo, Suzanne; Parmar, Priya; Theadom, Alice; Barber, Alan; Arroll, Bruce; Rush, Elaine; Elder, Hinemoa; Dyer, Jesse; Feigin, Valery

    2018-02-01

    Rationale Stroke is a major cause of death and disability worldwide, yet 80% of strokes can be prevented through modifications of risk factors and lifestyle and by medication. While management strategies for primary stroke prevention in individuals at high cardiovascular disease risk are well established, they are underutilized, and existing practices of primary stroke prevention are inadequate. Behavioral interventions are emerging as highly promising strategies to improve cardiovascular disease risk factor management. Health Wellness Coaching is an innovative, patient-focused, cost-effective, multidimensional psychological intervention designed to motivate participants to adhere to recommended medication and lifestyle changes, and has been shown to improve health and enhance well-being. Aims and/or hypothesis To determine the effectiveness of Health Wellness Coaching for primary stroke prevention in an ethnically diverse sample including Māori, Pacific Island, New Zealand European and Asian participants. Design A parallel, prospective, randomized, open-treatment, single-blinded end-point trial. Participants include 320 adults with absolute five-year cardiovascular disease risk ≥ 10%, calculated using the PREDICT web-based clinical tool. Randomization will be to Health Wellness Coaching or usual care groups. Participants randomized to Health Wellness Coaching will receive 15 coaching sessions over nine months. Study outcomes A substantial relative risk reduction of five-year cardiovascular disease risk at nine months post-randomization, which is defined as 10% relative risk reduction among those at moderate five-year cardiovascular disease risk (10-15%) and 25% among those at high risk (>15%). Discussion This clinical trial will determine whether Health Wellness Coaching is an effective intervention for reducing modifiable risk factors, and hence decreasing the risk of stroke and cardiovascular disease.

  18. The efficacy of traditional acupuncture on patients with chronic neck pain: study protocol of a randomized controlled trial.

    PubMed

    Yang, Yiling; Yan, Xiaoxia; Deng, Hongmei; Zeng, Dian; Huang, Jianpeng; Fu, Wenbin; Xu, Nenggui; Liu, Jianhua

    2017-07-10

    A large number of randomized trials on the use of acupuncture to treat chronic pain have been conducted. However, there is considerable controversy regarding the effectiveness of acupuncture. We designed a randomized trial involving patients with chronic neck pain (CNP) to investigate whether acupuncture is more effective than a placebo in treating CNP. A five-arm, parallel, single-blinded, randomized, sham-controlled trial was designed. Patients with CNP of more than 3 months' duration are being recruited from Guangdong Provincial Hospital of Chinese Medicine (China). Following examination, 175 patients will be randomized into one of five groups (35 patients in each group) as follows: a traditional acupuncture group (group A), a shallow-puncture group (group B), a non-acupoint acupuncture group (group C), a non-acupoint shallow-puncture group (group D) and a sham-puncture group (group E). The interventions will last for 20 min and will be carried out twice a week for 5 weeks. The primary outcome will be evaluated by changes in the Northwick Park Neck Pain Questionnaire (NPQ). Secondary outcomes will be measured by the pain threshold, the Short Form McGill Pain Questionnaire-2 (SF-MPQ-2), the 36-Item Short-Form Health Survey (SF-36) and diary entries. Analysis of the data will be performed at baseline, at the end of the intervention and at 3 months' follow-up. The safety of acupuncture will be evaluated at each treatment period. The purpose of this trial is to determine whether traditional acupuncture is more effective for chronic pain relief than sham acupuncture in adults with CNP, and to determine which type of sham acupuncture is the optimal control for clinical trials. Chinese Clinical Trial Registry: ChiCTR-IOR-15006886 . Registered on 2 July 2015.

  19. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
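    The "inherently parallel" character of these methods comes from the statistical independence of Monte Carlo samples: reliability batches can run concurrently and their estimates simply averaged. A toy sketch (the load/capacity distributions and limit-state function are invented for illustration, and a thread pool stands in for the parallel hardware discussed above):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def failure_fraction(seed, n=100_000):
    """One independent Monte Carlo batch: the fraction of samples whose
    random load exceeds the random capacity (a toy limit-state function)."""
    rng = np.random.default_rng(seed)
    load = rng.normal(100.0, 20.0, n)        # hypothetical load distribution
    capacity = rng.normal(150.0, 15.0, n)    # hypothetical capacity distribution
    return float(np.mean(load > capacity))

# Batches are independent, so they can be evaluated concurrently and the
# batch estimates averaged afterward. NumPy releases the GIL in the heavy
# array operations; process pools or MPI would serve the same role on
# the large-scale PSM problems described above.
with ThreadPoolExecutor(max_workers=4) as pool:
    fractions = list(pool.map(failure_fraction, range(8)))
p_fail = float(np.mean(fractions))           # estimated Pr(load > capacity)
```

    Because each batch touches only its own random stream, the estimator scales with the number of workers until the repeated structural analyses (here replaced by cheap normal draws) dominate the cost.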

  20. A landmark recognition and tracking experiment for flight on the Shuttle/Advanced Technology Laboratory (ATL)

    NASA Technical Reports Server (NTRS)

    Welch, J. D.

    1975-01-01

    The preliminary design of an experiment for landmark recognition and tracking from the Shuttle/Advanced Technology Laboratory is described. It makes use of parallel coherent optical processing to perform correlation tests between landmarks observed passively with a telescope and previously made holographic matched filters. The experimental equipment including the optics, the low power laser, the random access file of matched filters and the electro-optical readout device are described. A real time optically excited liquid crystal device is recommended for performing the input non-coherent optical to coherent optical interface function. A development program leading to a flight experiment in 1981 is outlined.

  1. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam, then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth, then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.
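    The speckle-suppression effect of a finite bandwidth can be reproduced with a toy 1-D random-phasor model (scatterer positions, wavenumbers, and counts below are arbitrary, not the paper's size parameters): averaging the intensity of a single fixed configuration over wavenumbers spanning ~10% washes out the speckle, lowering the contrast toward the ensemble-averaged pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_intensity(k_values, x, q):
    """Far-field intensity vs. scattering wavevector q for one fixed
    random configuration of point scatterers (1-D toy model):
    I(q) = |sum_j exp(i k q x_j)|^2, averaged over the wavenumbers k."""
    I = np.zeros(q.size)
    for k in k_values:
        field = np.exp(1j * k * q[:, None] * x).sum(axis=1)
        I += np.abs(field) ** 2
    return I / len(k_values)

x = rng.uniform(0.0, 1000.0, 300)   # one fixed quasi-random configuration
q = np.linspace(0.5, 1.5, 400)      # scattering wavevectors sampled

mono = speckle_intensity(np.array([1.0]), x, q)              # quasi-monochromatic
poly = speckle_intensity(np.linspace(0.95, 1.05, 50), x, q)  # ~10% bandwidth

def contrast(I):
    return I.std() / I.mean()       # ~1 for fully developed speckle
```

    The monochromatic pattern from the fixed configuration shows high-contrast speckle, while the polychromatic average is much smoother, mirroring the paper's comparison between Object 1 under the two illuminations.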

  2. Parallel Treatments Design: A Nested Single Subject Design for Comparing Instructional Procedures.

    ERIC Educational Resources Information Center

    Gast, David L.; Wolery, Mark

    1988-01-01

    This paper describes the parallel treatments design, a nested single subject experimental design that combines two concurrently implemented multiple probe designs, allows control for effects of extraneous variables through counterbalancing, and replicates its effects across behaviors. Procedural guidelines for the design's use and issues related…

  3. The effects of baroreflex activation therapy on blood pressure and sympathetic function in patients with refractory hypertension: the rationale and design of the Nordic BAT study.

    PubMed

    Gordin, Daniel; Fadl Elmula, Fadl Elmula M; Andersson, Bert; Gottsäter, Anders; Elf, Johan; Kahan, Thomas; Christensen, Kent Lodberg; Vikatmaa, Pirkka; Vikatmaa, Leena; Bastholm Olesen, Thomas; Groop, Per-Henrik; Olsen, Michael Hecht; Tikkanen, Ilkka

    2017-10-01

    To explore the effects of baroreflex activation therapy (BAT) on hypertension in patients with treatment-resistant or refractory hypertension. This investigator-initiated randomized, double-blind, 1:1 parallel-design clinical trial will include 100 patients with refractory hypertension from 6 tertiary referral hypertension centers in the Nordic countries. A Barostim Neo System will be implanted, and after 1 month patients will be randomized to either BAT for 16 months or continuous pharmacotherapy (BAT off) for 8 months followed by BAT for 8 months. A second randomization will take place after 16 months to BAT or BAT off for 3 months. Eligible patients have a daytime systolic ambulatory blood pressure (ABPM) of ≥145 mm Hg, and/or a daytime diastolic ABPM of ≥95 mm Hg after witnessed drug intake (including ≥3 antihypertensive drugs, preferably including a diuretic). The primary end point is the reduction in 24-hour systolic ABPM by BAT at 8 months, as compared to pharmacotherapy. Secondary and tertiary endpoints are the effects of BAT on home and office blood pressures, measures of cardiac and vascular structure and function during follow-up, and safety. This academic initiative will increase the understanding of the mechanisms and role of BAT in refractory hypertension.

  4. The rationale and design of the Beta-blocker to LOwer CArdiovascular Dialysis Events (BLOCADE) Feasibility Study.

    PubMed

    Roberts, Matthew A; Pilmore, Helen L; Ierino, Francesco L; Badve, Sunil V; Cass, Alan; Garg, Amit X; Hawley, Carmel M; Isbel, Nicole M; Krum, Henry; Pascoe, Elaine M; Tonkin, Andrew M; Vergara, Liza A; Perkovic, Vlado

    2015-03-01

    The Beta-blocker to LOwer CArdiovascular Dialysis Events (BLOCADE) Feasibility Study aims to determine the feasibility of a large-scale randomized controlled trial with clinical endpoints comparing the beta-blocking agent carvedilol with placebo in patients receiving dialysis. The BLOCADE Feasibility Study is a randomized, double-blind, placebo-controlled, parallel group feasibility study comparing the beta-blocking agent carvedilol with placebo. Patients receiving dialysis for ≥3 months and who are aged ≥50 years, or who are ≥18 years and have diabetes or cardiovascular disease, were eligible. The primary outcome was the proportion of participants who complete a 6-week run-in phase in which all participants received carvedilol titrated from 3.125 mg twice daily to 6.25 mg twice daily. Other measures included how many patients are screened, the proportion recruited, the overall recruitment rate, the proportion of participants who remain on study drug for 12 months and the incidence of intra-dialytic hypotension while on randomized treatment. The BLOCADE Feasibility Study commenced recruiting in May 2011 and involves 11 sites in Australia and New Zealand. The BLOCADE Feasibility Study will inform the design of a larger clinical endpoint study to determine whether beta-blocking agents provide benefit to patients receiving dialysis, and define whether such a study is feasible. © 2014 Asian Pacific Society of Nephrology.

  5. Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.

    DTIC Science & Technology

    1996-04-01

    This research investigated the important issues related to the analysis and design of real-time systems targeted to parallel architectures. In particular, the software specification models for real-time systems on parallel architectures were evaluated. A survey of current formal methods for uniprocessor real-time systems specifications was conducted to determine their extensibility in specifying real-time systems on parallel architectures. In…

  6. The effectiveness of a web-based brief alcohol intervention in reducing heavy drinking among adolescents aged 15 to 20 years with a low educational background: study protocol for a randomized controlled trial.

    PubMed

    Voogt, Carmen V; Poelen, Evelien A P; Lemmers, Lex A C J; Engels, Rutger C M E

    2012-06-15

    The serious negative health consequences of heavy drinking among adolescents are a cause for concern, especially among adolescents aged 15 to 20 years with a low educational background. In the Netherlands, there is a lack of alcohol prevention programs directed to the drinking patterns of this specific target group. The study described in this protocol will test the effectiveness of a web-based brief alcohol intervention that aims to reduce alcohol use among heavy-drinking adolescents aged 15 to 20 years with a low educational background. The effectiveness of the What Do You Drink (WDYD) web-based brief alcohol intervention will be tested among 750 low-educated, heavy-drinking adolescents, using a two-arm, parallel-group cluster randomized controlled trial. Classes of adolescents from educational institutions will be randomly assigned to either the experimental (n = 375: web-based brief alcohol intervention) or control condition (n = 375: no intervention). Primary outcome measures will be: 1) the percentage of participants who drink within the normative limits of the Dutch National Health Council for low-risk drinking, 2) reductions in mean weekly alcohol consumption, and 3) the frequency of binge drinking. The secondary outcome measures include alcohol-related cognitions, attitudes, self-efficacy, and subjective norms, which will be measured at baseline and at one and six months after the intervention. This study protocol presents the study design of a two-arm parallel-group randomized controlled trial to evaluate the effectiveness of the WDYD web-based brief alcohol intervention. We hypothesized a reduction in mean weekly alcohol consumption and in the frequency of binge drinking in the experimental condition, resulting from the web-based brief alcohol intervention, compared to the control condition. Netherlands Trial Register NTR2971.

  7. Whole body vibration for older persons: an open randomized, multicentre, parallel, clinical trial

    PubMed Central

    2011-01-01

    Background Institutionalized older persons have a poor functional capacity. Including physical exercise in their routine activities decreases their frailty and improves their quality of life. Whole-body vibration (WBV) training is a type of exercise that seems beneficial in frail older persons to improve their functional mobility, but the evidence is inconclusive. This trial will compare the results of exercise with WBV and exercise without WBV in improving body balance, muscle performance, and fall prevention in institutionalized older persons. Methods/Design An open, multicentre, parallel randomized clinical trial with blinded assessment. 160 nursing home residents aged over 65 years and of both sexes will be identified to participate in the study. Participants will be centrally randomised and allocated to interventions (vibration or exercise group) by telephone. The vibration group will perform static/dynamic exercises (balance and resistance training) on a vibratory platform (frequency: 30-35 Hz; amplitude: 2-4 mm) over a six-week training period (3 sessions/week). The exercise group will perform the same exercise protocol but without the vibration stimulus. The primary outcome measure is static/dynamic body balance. Secondary outcomes are muscle strength and the number of new falls. Follow-up measurements will be collected at 6 weeks and at 6 months after randomization. Efficacy will be analysed on an intention-to-treat (ITT) basis and 'per protocol'. The effects of the intervention will be evaluated using the "t" test, Mann-Whitney test, or Chi-square test, depending on the type of outcome. The final analysis will be performed 6 weeks and 6 months after randomization. Discussion This study will help to clarify whether WBV training improves body balance, gait mobility and muscle strength in frail older persons living in nursing homes. As far as we know, this will be the first study to evaluate the efficacy of WBV for the prevention of falls. 
Trial Registration ClinicalTrials.gov: NCT01375790 PMID:22192313
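The protocol above selects the t-test, Mann-Whitney test, or chi-square test according to outcome type. As a minimal, standard-library-only illustration of the rank-based option (the group sizes and score distributions below are invented placeholders, not trial data), a Mann-Whitney U comparison of two parallel groups can be sketched as:

```python
# Hedged sketch: Mann-Whitney U test for a two-group parallel comparison,
# using the normal approximation (valid for continuous data without ties).
# Simulated "balance scores" stand in for real outcomes.
import math
import random

random.seed(0)
vibration = [random.gauss(55.0, 6.0) for _ in range(80)]  # hypothetical scores
exercise = [random.gauss(52.0, 6.0) for _ in range(80)]

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation (no ties)."""
    combined = sorted((x, g) for g, xs in enumerate((a, b)) for x in xs)
    rank_sum_a = sum(r + 1 for r, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(a), len(b)
    u = rank_sum_a - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean_u) / sd_u
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

u, p = mann_whitney_u(vibration, exercise)
print(f"U = {u:.0f}, two-sided p = {p:.4f}")
```

In practice a library routine (e.g. one with tie correction and exact small-sample tables) would be preferred; the sketch only shows the shape of the calculation.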

  8. Comparison of the safety and immunogenicity of live attenuated and inactivated hepatitis A vaccine in healthy Chinese children aged 18 months to 16 years: results from a randomized, parallel controlled, phase IV study.

    PubMed

    Ma, F; Yang, J; Kang, G; Sun, Q; Lu, P; Zhao, Y; Wang, Z; Luo, J; Wang, Z

    2016-09-01

    For large-scale immunization of children with hepatitis A (HA) vaccines in China, accurately designed studies comparing the safety and immunogenicity of the live attenuated HA vaccine (HA-L) and the inactivated HA vaccine (HA-I) are necessary. A randomized, parallel controlled, phase IV clinical trial was conducted with 6000 healthy children aged 18 months to 16 years. HA-L or HA-I was administered in a 1:1 ratio to randomly selected participants. Safety and immunogenicity were evaluated. Both HA-L and HA-I were well tolerated by all participants. The immunogenicity results showed that the seroconversion rates (HA-L versus HA-I: 98.0% versus 100%, respectively, p >0.05) and geometric mean concentrations in participants negative for antibodies against HA virus IgG (anti-HAV IgG) before vaccination did not differ significantly between the two types of vaccines (HA-L versus HA-I first dose: 898.9 versus 886.2 mIU/mL, respectively, p >0.05). After administration of the booster dose of HA-I, the geometric mean concentration of anti-HAV IgG (HA-I booster dose: 2591.2 mIU/mL) was higher than that after the first dose (p <0.05) and that reported in participants administered HA-L (p <0.05). Additionally, 12 (25%) of the 48 randomly selected participants who received HA-L tested positive for HA antigen in stool samples. Hence, both HA-L and HA-I can provide acceptable immunogenicity in children. The effects of long-term immunogenicity after natural exposure to wild-type HA virus and the possibility of mutational shifts of the live vaccine virus in the field need to be studied in more detail. Copyright © 2016. Published by Elsevier Ltd.

  9. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar-based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image-domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated in 8 human subjects, with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of the apparent diffusion coefficient and fractional anisotropy, along with the error of fitting the data to the diffusion model (residual error). Larger positive values of the mutual information index with increasing R confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging, and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least systematic and random noise; significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.

  10. Rationale, Design, and Baseline Characteristics of the Utopia Trial for Preventing Diabetic Atherosclerosis Using an SGLT2 Inhibitor: A Prospective, Randomized, Open-Label, Parallel-Group Comparative Study.

    PubMed

    Katakami, Naoto; Mita, Tomoya; Yoshii, Hidenori; Shiraiwa, Toshihiko; Yasuda, Tetsuyuki; Okada, Yosuke; Umayahara, Yutaka; Kaneto, Hideaki; Osonoi, Takeshi; Yamamoto, Tsunehiko; Kuribayashi, Nobuichi; Maeda, Kazuhisa; Yokoyama, Hiroki; Kosugi, Keisuke; Ohtoshi, Kentaro; Hayashi, Isao; Sumitani, Satoru; Tsugawa, Mamiko; Ohashi, Makoto; Taki, Hideki; Nakamura, Tadashi; Kawashima, Satoshi; Sato, Yasunori; Watada, Hirotaka; Shimomura, Iichiro

    2017-10-01

    Sodium-glucose co-transporter-2 (SGLT2) inhibitors are anti-diabetic agents that improve glycemic control with a low risk of hypoglycemia and ameliorate a variety of cardiovascular risk factors. The aim of the ongoing study described herein is to investigate the preventive effects of tofogliflozin, a potent and selective SGLT2 inhibitor, on the progression of atherosclerosis in subjects with type 2 diabetes (T2DM), using carotid intima-media thickness (IMT), an established marker of cardiovascular disease (CVD). The Study of Using Tofogliflozin for Possible better Intervention against Atherosclerosis for type 2 diabetes patients (UTOPIA) trial is a prospective, randomized, open-label, blinded-endpoint, multicenter, parallel-group comparative study. The aim is to recruit a total of 340 subjects with T2DM but no history of apparent CVD at 24 clinical sites and randomly allocate them to a tofogliflozin treatment group or a conventional treatment group using drugs other than SGLT2 inhibitors. As primary outcomes, changes in the mean and maximum IMT of the common carotid artery during a 104-week treatment period will be measured by carotid echography. Secondary outcomes include changes in glycemic control, parameters related to β-cell function and diabetic nephropathy, the occurrence of CVD and adverse events, and biochemical measurements reflecting vascular function. This is the first study to address the effects of SGLT2 inhibitors on the progression of carotid IMT in subjects with T2DM without a history of CVD. The results will be available in the near future and are expected to provide clinical data helpful in the prevention of diabetic atherosclerosis and subsequent CVD. Funding: Kowa Co., Ltd. Trial registration: UMIN000017607.

  11. The impact of a mobile application-based treatment for urinary incontinence in adult women: Design of a mixed-methods randomized controlled trial in a primary care setting.

    PubMed

    Loohuis, Anne M M; Wessels, Nienke J; Jellema, Petra; Vermeulen, Karin M; Slieker-Ten Hove, Marijke C; van Gemert-Pijnen, Julia E W C; Berger, Marjolein Y; Dekker, Janny H; Blanker, Marco H

    2018-02-02

    We aim to assess whether a purpose-developed mobile application (app) is non-inferior in effectiveness, and cost-effective, when used to treat women with urinary incontinence (UI), compared to care as usual in Dutch primary care. Additionally, we will explore the expectations and experiences of patients and care providers regarding app usage. A mixed-methods study will be performed, combining a pragmatic, randomized-controlled, non-inferiority trial with an extensive process evaluation. Women aged ≥18 years, suffering from UI ≥2 times per week and with access to a smartphone or tablet, are eligible to participate. The primary outcome will be the change in UI symptom score at 4 months after randomization, as assessed by the International Consultation on Incontinence Modular Questionnaire UI Short Form. Secondary outcomes will be the change in UI symptom score at 12 months, as well as the patient-reported global impression of improvement, quality of life, change in sexual functioning, UI episodes per day, and costs at 4 and 12 months. In parallel, we will perform an extensive process evaluation to assess the expectations and experiences of patients and care providers regarding app usage, making use of interviews, focus group sessions, and log data analysis. This study will assess both the effectiveness and cost-effectiveness of app-based treatment for UI. The combination with the process evaluation, performed in parallel, should also give valuable insights into the contextual factors that influence the effectiveness of such a treatment. © 2018 The Authors. Neurourology and Urodynamics Published by Wiley Periodicals, Inc.

  12. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes: GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent-sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of other available PRNGs, such as those of MATLAB, Fortran, and the Park-Miller algorithm, using standard statistical tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
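The abstract mentions combining several mixing steps with a xorshift PRNG. As a hedged, illustrative sketch of the xorshift component only (this is not the paper's code; the constants are Marsaglia's classic (13, 17, 5) triple for a 32-bit state):

```python
# Minimal xorshift32 generator. In the independent-sequence method each GPU
# thread would own its own nonzero state; here one stream is drawn serially.
MASK32 = 0xFFFFFFFF

def xorshift32(seed):
    """Yield an endless stream of 32-bit pseudo-random integers."""
    x = seed & MASK32
    assert x != 0, "xorshift must be seeded with a nonzero state"
    while True:
        x ^= (x << 13) & MASK32
        x ^= x >> 17
        x ^= (x << 5) & MASK32
        yield x

gen = xorshift32(2463534242)
samples = [next(gen) for _ in range(5)]
print(samples)
```

The state-update is three shift-and-XOR steps, which is why such generators map well onto GPU threads: no multiplications, no shared tables, one word of state per sequence.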

  13. Weight loss intervention for young adults using mobile technology: design and rationale of a randomized controlled trial - Cell Phone Intervention for You (CITY).

    PubMed

    Batch, Bryan C; Tyson, Crystal; Bagwell, Jacqueline; Corsino, Leonor; Intille, Stephen; Lin, Pao-Hwa; Lazenka, Tony; Bennett, Gary; Bosworth, Hayden B; Voils, Corrine; Grambow, Steven; Sutton, Aziza; Bordogna, Rachel; Pangborn, Matthew; Schwager, Jenifer; Pilewski, Kate; Caccia, Carla; Burroughs, Jasmine; Svetkey, Laura P

    2014-03-01

    The obesity epidemic has spread to young adults, leading to significant public health implications later in adulthood. Intervention in early adulthood may be an effective public health strategy for reducing the long-term health impact of the epidemic. Few weight loss trials have been conducted in young adults, and it is unclear what weight loss strategies are beneficial in this population. To describe the design and rationale of the NHLBI-sponsored Cell Phone Intervention for You (CITY) study, which is a single-center, randomized three-arm trial that compares the impact on weight loss of 1) a behavioral intervention delivered almost entirely via cell phone technology (Cell Phone group); and 2) a behavioral intervention delivered mainly through monthly personal coaching calls enhanced by self-monitoring via cell phone (Personal Coaching group), each compared to 3) a usual-care, advice-only control condition. A total of 365 community-dwelling overweight/obese adults aged 18-35 years were randomized to receive one of these three interventions for 24 months in a parallel-group design. Study personnel assessing outcomes were blinded to group assignment. The primary outcome is weight change at 24 months. We hypothesize that each active intervention will cause more weight loss than the usual care condition. Study completion is anticipated in 2014. If effective, implementation of the CITY interventions could mitigate the alarming rates of obesity in young adults through promotion of weight loss. ClinicalTrials.gov: NCT01092364. Published by Elsevier Inc.

  14. Weight loss intervention for young adults using mobile technology: design and rationale of a randomized controlled trial – Cell phone Intervention for You (CITY)

    PubMed Central

    Batch, Bryan C.; Tyson, Crystal; Bagwell, Jacqueline; Corsino, Leonor; Intille, Stephen; Lin, Pao-Hwa; Lazenka, Tony; Bennett, Gary; Bosworth, Hayden B.; Voils, Corrine; Grambow, Steven; Sutton, Aziza; Bordogna, Rachel; Pangborn, Matthew; Schwager, Jenifer; Pilewski, Kate; Caccia, Carla; Burroughs, Jasmine; Svetkey, Laura P.

    2014-01-01

    Background The obesity epidemic has spread to young adults, leading to significant public health implications later in adulthood. Intervention in early adulthood may be an effective public health strategy for reducing the long-term health impact of the epidemic. Few weight loss trials have been conducted in young adults. It is unclear what weight loss strategies are beneficial in this population. Purpose To describe the design and rationale of the NHLBI-sponsored Cell Phone Intervention for You (CITY) study, which is a single-center, randomized three-arm trial that compares the impact on weight loss of 1) a behavioral intervention that is delivered almost entirely via cell phone technology (Cell Phone group); and 2) a behavioral intervention delivered mainly through monthly personal coaching calls enhanced by self-monitoring via cell phone (Personal Coaching group), each compared to 3) a usual-care, advice-only control condition. Methods A total of 365 community-dwelling overweight/obese adults aged 18–35 years were randomized to receive one of these three interventions for 24 months in a parallel-group design. Study personnel assessing outcomes were blinded to group assignment. The primary outcome is weight change at 12 months. We hypothesize that each active intervention will cause more weight loss than the usual care condition. Study completion is anticipated in 2014. Conclusions If effective, implementation of the CITY interventions could mitigate the alarming rates of obesity in young adults through promotion of weight loss. PMID:24462568

  15. Self-Administered Computer Therapy for Apraxia of Speech: Two-Period Randomized Control Trial With Crossover.

    PubMed

    Varley, Rosemary; Cowell, Patricia E; Dyson, Lucy; Inglis, Lesley; Roper, Abigail; Whiteside, Sandra P

    2016-03-01

    There is currently little evidence on effective interventions for poststroke apraxia of speech. We report outcomes of a trial of self-administered computer therapy for apraxia of speech. Effects of speech intervention on naming and repetition of treated and untreated words were compared with those of a visuospatial sham program. The study used a parallel-group, 2-period, crossover design, with participants receiving 2 interventions. Fifty participants with chronic and stable apraxia of speech were randomly allocated to 1 of 2 order conditions: speech-first condition versus sham-first condition. Period 1 design was equivalent to a randomized controlled trial. We report results for this period and profile the effect of the period 2 crossover. Period 1 results revealed significant improvement in naming and repetition only in the speech-first group. The sham-first group displayed improvement in speech production after speech intervention in period 2. Significant improvement of treated words was found in both naming and repetition, with little generalization to structurally similar and dissimilar untreated words. Speech gains were largely maintained after withdrawal of intervention. There was a significant relationship between treatment dose and response. However, average self-administered dose was modest for both groups. Future software design would benefit from incorporation of social and gaming components to boost motivation. Single-word production can be improved in chronic apraxia of speech with behavioral intervention. Self-administered computerized therapy is a promising method for delivering high-intensity speech/language rehabilitation. URL: http://orcid.org/0000-0002-1278-0601. Unique identifier: ISRCTN88245643. © 2016 American Heart Association, Inc.

  16. Beyond silence: protocol for a randomized parallel-group trial comparing two approaches to workplace mental health education for healthcare employees.

    PubMed

    Moll, Sandra; Patten, Scott Burton; Stuart, Heather; Kirsh, Bonnie; MacDermid, Joy Christine

    2015-04-16

    Mental illness is a significant and growing problem in Canadian healthcare organizations, leading to tremendous personal, social and financial costs for individuals, their colleagues, their employers and their patients. Early and appropriate intervention is needed, but unfortunately, few workers get the help that they need in a timely way due to barriers related to poor mental health literacy, stigma, and inadequate access to mental health services. Workplace education and training is one promising approach to early identification and support for workers who are struggling. Little is known, however, about which approach is most effective, particularly in the context of healthcare work. The purpose of this study is to compare the impact of a customized, contact-based education approach with standard mental health literacy training on the mental health knowledge, stigmatized beliefs and help-seeking/help-outreach behaviors of healthcare employees. A multi-centre, randomized, two-group parallel trial design will be adopted. Two hundred healthcare employees will be randomly assigned to one of two educational interventions: Beyond Silence, a peer-led program customized to the healthcare workplace, and Mental Health First Aid, a standardized literacy-based training program. Pre-, post- and 3-month follow-up surveys will track changes in knowledge (mental health literacy), attitudes towards mental illness, and help-seeking/help-outreach behavior. An intent-to-treat, repeated-measures analysis will be conducted to compare changes in the two groups over time in terms of the primary outcome of behavior change. Linear regression modeling will be used to explore the extent to which knowledge and attitudes predict behavior change. Qualitative interviews with participants and leaders will also be conducted to examine the process and implementation of the programs.
    This is one of the first experimental studies to compare outcomes of standard mental health literacy training with those of an intervention with an added anti-stigma component (using best practices of contact-based education). Study findings will inform recommendations for designing workplace mental health education to promote early intervention for employees with mental health issues in the context of healthcare work. Trial registration: ClinicalTrials.gov NCT02158871 (May 2014).

  17. Outcome of patients after lower limb fracture with partial weight bearing postoperatively treated with or without anti-gravity treadmill (alter G®) during six weeks of rehabilitation - a protocol of a prospective randomized trial.

    PubMed

    Henkelmann, Ralf; Schneider, Sebastian; Müller, Daniel; Gahr, Ralf; Josten, Christoph; Böhme, Jörg

    2017-03-14

    Partial or complete immobilization leads to different adjustment processes, such as a higher risk of muscle atrophy or a decrease in general performance. The present study is designed to test the efficacy of the anti-gravity treadmill (alter G®), compared to a standard rehabilitation protocol, in patients with tibial plateau (group 1) or ankle fractures (group 2) with six weeks of partial weight bearing of 20 kg. This prospective randomized study will include a total of 60 patients for each group according to predefined inclusion and exclusion criteria. 1:1 randomization will be performed centrally via fax, supported by the Clinical Trial Centre Leipzig (ZKS Leipzig). Patients in the treatment arm will be treated with an anti-gravity treadmill (alter G®) instead of physiotherapy. The protocol is designed parallel to standard physiotherapy, with two to three treadmill training sessions per week, each lasting 20 min, for six weeks. To date, no published randomized controlled trial with an anti-gravity treadmill is available. The findings of this study can help to modify the rehabilitation of patients with partial weight bearing due to their injury or postoperative protocol, and will show whether an anti-gravity treadmill is useful in the rehabilitation of these patients. Further ongoing studies will identify different indications for an anti-gravity treadmill; in connection with those studies, a more valid statement regarding safety and efficacy will be possible. NCT02790229, registered on May 29, 2016.

  18. Response to Placebo in Clinical Epilepsy Trials - Old Ideas and New Insights

    PubMed Central

    Goldenholz, Daniel M.; Goldenholz, Shira R

    2016-01-01

    Randomized placebo-controlled trials are a mainstay of modern clinical epilepsy research; the success or failure of innovative therapies depends on proving superiority to a placebo. Consequently, understanding what drives response to placebo (including the “placebo effect”) may facilitate evaluation of new therapies. In this review, part one will explore observations about placebos specific to epilepsy, including the relatively higher placebo response in children, the apparent increase in placebo response over the past several decades, geographic variation in the placebo effect, the relationship to baseline epilepsy characteristics, the influence of nocebo on clinical trials, the possible increase in sudden unexpected death in epilepsy (SUDEP) in placebo arms of trials, and the patterns that placebo responses appear to follow in individual patients. Part two will discuss the principal causes of placebo responses, including regression to the mean, anticipation, classical conditioning, the Hawthorne effect, expectations from symbols, and the natural history of disease. Included in part two will be a brief overview of recent advances using simulations from large datasets that have afforded new insights into the causes of epilepsy-related placebo responses. In part three, new developments in study design will be explored, including sequential parallel comparison, two-way enriched design, time to pre-randomization, delayed start, and cohort reduction techniques. PMID:26921852

  19. Testing for carryover effects after cessation of treatments: a design approach.

    PubMed

    Sturdevant, S Gwynn; Lumley, Thomas

    2016-08-02

    Recently, trials addressing noisy measurements with diagnosis occurring by exceeding thresholds (such as diabetes and hypertension) have been published which attempt to measure carryover - the impact that treatment has on an outcome after cessation. The design of these trials has been criticised, and simulations have been conducted which suggest that the parallel designs used are not adequate to test this hypothesis; two proposed solutions are that either a different parallel design or a cross-over design could allow for diagnosis of carryover. We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying the criteria used to diagnose systolic hypertension. Using the difference in cumulative incidence of hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having an acceptable Type I error rate: under the null hypothesis of no carryover, the error rate is well above the nominal 5% level. When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis is validated by simulation.
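A greatly simplified sketch of the kind of simulation machinery the abstract describes (all parameters are invented for illustration, and the naive fixed-cutoff decision rule below is a stand-in, not the authors' analysis): simulate noisy blood pressure in two parallel arms with no true carryover, diagnose hypertension by threshold exceedance at any visit, and see how often the difference in cumulative incidence looks "significant".

```python
# Hedged sketch: threshold-based diagnosis from noisy measurements in a
# parallel-group trial under the null of no carryover. Parameters invented.
import random

random.seed(1)
THRESHOLD = 140.0  # diagnose hypertension when any measured SBP exceeds this

def incident_hypertension(n, true_mean, sd, visits):
    """Count subjects (out of n) with any visit above the threshold."""
    diagnosed = 0
    for _ in range(n):
        if any(random.gauss(true_mean, sd) > THRESHOLD for _ in range(visits)):
            diagnosed += 1
    return diagnosed

def one_trial(n=200):
    # No true carryover: after treatment cessation both arms share one mean.
    a = incident_hypertension(n, 135.0, 8.0, visits=3)
    b = incident_hypertension(n, 135.0, 8.0, visits=3)
    return abs(a - b) / n  # absolute difference in cumulative incidence

# How often does a naive rule (incidence difference above an arbitrary
# cutoff) flag "carryover" when there is none? Measurement noise alone
# drives the diagnoses, so the rule can misbehave.
reps = 500
rejections = sum(one_trial() > 0.08 for _ in range(reps))
print(f"empirical rejection rate under the null: {rejections / reps:.3f}")
```

A proper study would replace the arbitrary cutoff with the actual test statistic under evaluation and sweep the diagnostic criteria, as the abstract indicates.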

  20. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA, and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication at arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.

  1. National Combustion Code: Parallel Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.

    2000-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.

  2. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  3. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 × 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors, based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 × 10^8 histories. For a smaller number of histories (1 × 10^8), the reduction of efficiency on the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 × 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central-axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
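The efficiency figure quoted above follows the usual definitions speedup = T1/TN and efficiency = speedup/N, where T1 is the single-processor runtime and TN the runtime on N processors. A tiny sketch with invented timings (chosen only to reproduce the 83.9% shape of the calculation, not taken from the paper):

```python
# Parallel speedup and efficiency from wall-clock timings (timings invented).
def speedup(t1, tn):
    """Ratio of serial runtime to parallel runtime."""
    return t1 / tn

def efficiency(t1, tn, n_procs):
    """Speedup normalized by processor count; 1.0 means ideal scaling."""
    return speedup(t1, tn) / n_procs

# Example: a job taking 1000 s serially and ~49.7 s on 24 processors
# gives efficiency 0.839 by construction of these made-up numbers.
t1, n = 1000.0, 24
tn = t1 / (n * 0.839)
print(f"speedup = {speedup(t1, tn):.1f}, efficiency = {efficiency(t1, tn, n):.3f}")
```

When the per-processor work shrinks (fewer histories per rank), communication time dominates and this efficiency drops, which is exactly the trend the abstract reports.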

  4. The energy density distribution of an ideal gas and Bernoulli’s equations

    NASA Astrophysics Data System (ADS)

    Santos, Leonardo S. F.

    2018-05-01

    This work discusses the energy density distribution in an ideal gas and the consequences of Bernoulli’s equation and the corresponding relation for compressible fluids. The aim is to study how Bernoulli’s equation determines the energy flow in a fluid, even though Bernoulli’s equation does not describe the energy density itself. A molecular-dynamics model that describes an ideal gas at rest with uniform density is modified to explore a gas in motion with non-uniform density and gravitational effects. The difference between the component of a particle’s speed parallel to the gas speed and the gas speed itself is called the ‘parallel random speed’, and the pressure arising from it is termed the parallel pressure. The modified model predicts that the energy density is the sum of the kinetic and gravitational potential energy densities plus two terms involving the static and parallel pressures. Applying Bernoulli’s equation and the corresponding relation for compressible fluids to the energy density expression yields two new formulations: for both incompressible and compressible gas, the energy density can be written as a function of the stagnation, static and parallel pressures, with no explicit dependence on the kinetic or gravitational potential energy densities. These expressions of the energy density are the main contributions of this work. When the parallel pressure is uniform, the energy density for both the incompressible approximation and the compressible gas does not converge to zero in the limit of null static pressure. This result is rather unusual because the temperature tends to zero for null pressure. When the gas is considered incompressible and the parallel pressure equals the static pressure, the energy density maintains this unusual behaviour at small pressures; only for a compressible gas does the energy density converge to zero in the limit of null pressure. Only the last situation describes an intuitive behaviour for an ideal gas.
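
The rewriting described above rests on two standard relations, sketched here for orientation (illustrative only; the paper's exact energy-density expressions, including the parallel-pressure terms, are not reproduced):

```latex
% Incompressible Bernoulli along a streamline, defining the stagnation
% pressure p_0 in terms of static pressure p, density \rho, speed v, height z:
p + \tfrac{1}{2}\rho v^{2} + \rho g z = p_{0}
% Translational energy density of a monatomic ideal gas at rest:
u = \tfrac{3}{2}\,p
```

Substituting the first relation into an energy-density expression of this kind is what trades the explicit kinetic and gravitational terms for stagnation, static and (in the paper) parallel pressures.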

  5. Inefficient conjunction search made efficient by concurrent spoken delivery of target identity.

    PubMed

    Reali, Florencia; Spivey, Michael J; Tyler, Melinda J; Terranova, Joseph

    2006-08-01

    Visual search based on a conjunction of two features typically elicits reaction times that increase linearly as a function of the number of distractors, whereas search based on a single feature is essentially unaffected by set size. These and related findings have often been interpreted as evidence of a serial search stage that follows a parallel search stage. However, a wide range of studies has shown a blending of these two processes. For example, when a spoken instruction identifies the conjunction target concurrently with the visual display, the effect of set size is significantly reduced, suggesting that incremental linguistic processing of the first feature adjective and then the second feature adjective may facilitate something approximating a parallel extraction of objects during search for the target. Here, we extend these results to a variety of experimental designs. First, we replicate the result with a mixed-trials design (ruling out potential strategies associated with the blocked design of the original study). Second, in a mixed-trials experiment, the order of adjective types in the spoken query varies randomly across conditions. In a third experiment, we extend the effect to a triple-conjunction search task. A fourth (control) experiment demonstrates that these effects are not due to an efficient odd-one-out search that ignores the linguistic input. This series of experiments, along with attractor-network simulations of the phenomena, provides further evidence toward understanding linguistically mediated influences in real-time visual search processing.

  6. Effect of Cosmos caudatus (Ulam raja) supplementation in patients with type 2 diabetes: Study protocol for a randomized controlled trial.

    PubMed

    Cheng, Shi-Hui; Ismail, Amin; Anthony, Joseph; Ng, Ooi Chuan; Hamid, Azizah Abdul; Yusof, Barakatun-Nisak Mohd

    2016-02-27

    Type 2 diabetes mellitus is a major health threat worldwide. Cosmos caudatus is one of the medicinal plants used to treat type 2 diabetes. Therefore, this study aims to determine the effectiveness and safety of C. caudatus in patients with type 2 diabetes. A metabolomic approach will be used to compare the metabolite profiles of C. caudatus-treated diabetic patients and diabetic controls. This is a single-center, randomized, controlled, two-arm parallel design clinical trial that will be carried out in a tertiary hospital in Malaysia. In this study, 100 patients diagnosed with type 2 diabetes will be enrolled. Diabetic patients who meet the eligibility criteria will be randomly allocated to two groups: a C. caudatus-treated (U) group and a diabetic control (C) group. Primary and secondary outcomes will be measured at baseline, 4, 8, and 12 weeks. The serum and urine metabolome of both groups will be examined using proton NMR spectroscopy. The study will be the first randomized controlled trial to assess whether C. caudatus can confer a beneficial effect in patients with type 2 diabetes. The results of this trial will provide clinical evidence on the effectiveness and safety of C. caudatus in patients with type 2 diabetes. ClinicalTrials.gov identifier: NCT02322268.

  7. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
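
The abstract does not detail the PE design, but the principle a 1-bit serial element exploits can be sketched as follows (an illustrative sketch, not the paper's processor): operands stream through the PE least-significant-bit first, one bit per clock cycle, so the datapath needs only a full adder and a single carry flip-flop.

```python
def bit_serial_add(a_bits, b_bits):
    """Serial addition as a 1-bit PE performs it: operands arrive
    LSB-first, one bit per cycle; only a one-bit carry is stored
    between cycles."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)            # sum bit emitted this cycle
        carry = (a & b) | (carry & (a ^ b))  # carry into the next cycle
    out.append(carry)                        # final carry-out bit
    return out
```

In a SIMD machine, every PE runs this same cycle-by-cycle loop in lockstep on its own operand pair, which is what makes very large arrays of such simple elements practical.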

  8. Concurrent Collections (CnC): A new approach to parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knobe, Kathleen

    2010-05-07

    A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer or program) who demands maximal flexibility for optimizing the mapping to a wide range of target platforms (parallel / serial, shared / distributed, homogeneous / heterogeneous, etc.) but needs no knowledge of the domain. Concurrent Collections (CnC) is based on this separation of concerns. The talk will present CnC and its benefits. About the speaker. Kathleen Knobe has focused throughout her career on parallelism especially compiler technology, runtime system design and language design. She worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991 designing compilers for a wide range of parallel platforms for Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipment’s Cambridge Research Lab (CRL). She stayed through the DEC/Compaq/HP mergers and when CRL was acquired and absorbed by Intel.
She currently works in the Software and Services Group / Technology Pathfinding and Innovation.

  9. Concurrent Collections (CnC): A new approach to parallel programming

    ScienceCinema

    Knobe, Kathleen

    2018-04-16

    A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer or program) who demands maximal flexibility for optimizing the mapping to a wide range of target platforms (parallel / serial, shared / distributed, homogeneous / heterogeneous, etc.) but needs no knowledge of the domain. Concurrent Collections (CnC) is based on this separation of concerns. The talk will present CnC and its benefits. About the speaker. Kathleen Knobe has focused throughout her career on parallelism especially compiler technology, runtime system design and language design. She worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991 designing compilers for a wide range of parallel platforms for Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipment’s Cambridge Research Lab (CRL). She stayed through the DEC/Compaq/HP mergers and when CRL was acquired and absorbed by Intel. 
She currently works in the Software and Services Group / Technology Pathfinding and Innovation.

  10. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…
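
The core of Horn's PA can be sketched as follows (a minimal sketch, assuming principal components extracted from a correlation matrix and the mean random-data eigenvalues as the retention threshold; implementations differ in exactly such choices, which is the variation the abstract alludes to):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain the components whose observed
    eigenvalues exceed the mean eigenvalues obtained from random data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Descending eigenvalues of the observed correlation matrix
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Mean eigenvalues over normally distributed random datasets
    rand = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))
```

Dinno's question is precisely whether drawing `r` from distributions other than the normal changes the retention decision.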

  11. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, owing to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has addressed the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
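
The priority idea can be sketched as follows (a planar, serial rendition for illustration only; the actual algorithm works with intrinsic surface distances and processes candidates concurrently, resolving conflicts by comparing priorities):

```python
import math
import random

def priority_dart_throwing(n_candidates, radius, seed=0):
    """Priority-based dart throwing in the unit square. Each candidate
    carries a random unique priority (here, its position in a shuffled
    list); a candidate is accepted only if it does not conflict with an
    accepted higher-priority candidate. Given the priorities, the result
    is deterministic, so threads can examine candidates concurrently and
    settle conflicts just by comparing priority values."""
    rng = random.Random(seed)
    cands = [(rng.random(), rng.random()) for _ in range(n_candidates)]
    rng.shuffle(cands)  # shuffled order serves as the random priority
    accepted = []
    for p in cands:     # highest priority first
        if all(math.dist(p, q) >= radius for q in accepted):
            accepted.append(p)
    return accepted
```

Because each accept/reject decision depends only on higher-priority candidates, no explicit spatial partition of the domain is needed to parallelize the loop.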

  12. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
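
The abstract gives no algorithmic details, but the range-partitioning idea suggested by the name "barrel-sort" can be sketched serially (an assumed reconstruction, not the paper's algorithm): each processor owns one "barrel" covering a slice of the key range, keys are routed to their barrels, and each barrel is sorted locally.

```python
def barrel_sort(keys, n_procs):
    """Serial sketch of range-partitioned sorting: bin keys into one
    barrel per processor, sort each barrel, and concatenate."""
    if not keys:
        return []
    lo, hi = min(keys), max(keys)
    width = (hi - lo) // n_procs + 1   # key-range width per barrel
    barrels = [[] for _ in range(n_procs)]
    for k in keys:                     # in parallel: an all-to-all exchange
        barrels[(k - lo) // width].append(k)
    out = []
    for b in barrels:                  # each processor sorts its own barrel
        out.extend(sorted(b))
    return out
```

On a machine with high message-passing overhead, the single all-to-all exchange amortizes communication, which matches the abstract's stated target of medium-scale architectures.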

  13. A comparison between orthogonal and parallel plating methods for distal humerus fractures: a prospective randomized trial.

    PubMed

    Lee, Sang Ki; Kim, Kap Jung; Park, Kyung Hoon; Choy, Won Sik

    2014-10-01

    With the continuing improvements in implants for distal humerus fractures, it is expected that newer types of plates, which are anatomically precontoured, thinner and less irritating to soft tissue, would have comparable outcomes when used in a clinical study. The purpose of this study was to compare the clinical and radiographic outcomes in patients with distal humerus fractures who were treated with orthogonal and parallel plating methods using precontoured distal humerus plates. Sixty-seven patients with a mean age of 55.4 years (range 22-90 years) were included in this prospective study. The subjects were randomly assigned to receive 1 of 2 treatments: orthogonal or parallel plating. The following results were assessed: operating time, time to fracture union, presence of a step or gap at the articular margin, varus-valgus angulation, functional recovery, and complications. No intergroup differences were observed based on the radiological and clinical results. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes, mean operation time, union time, or complication rates. There were no cases of fracture nonunion in either group; heterotopic ossification was found in 3 patients in the orthogonal plating group and in 2 patients in the parallel plating group. However, the orthogonal plating method may be preferred in cases of coronal shear fractures, where posterior-to-anterior fixation may provide additional stability to the intraarticular fracture, while the parallel plating method may be preferred for fractures that occur at the most distal end of the humerus.

  14. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy non-linear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
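
A minimal sketch of the scheme described (assuming Euler-Maruyama at both levels, a randomly excited Duffing oscillator, and one shared Brownian path so that coarse and fine solvers see the same noise; all parameter values are illustrative, not the paper's):

```python
import numpy as np

def drift(u, delta=0.1, alpha=1.0, beta=1.0):
    """Duffing drift: x'' + delta*x' + alpha*x + beta*x^3 = noise."""
    x, v = u
    return np.array([v, -delta * v - alpha * x - beta * x**3])

def em_step(u, dt, dW, sigma=0.5):
    """One Euler-Maruyama step; noise enters the velocity equation only."""
    return u + drift(u) * dt + np.array([0.0, sigma * dW])

def fine(u, dt, dWs):
    """Fine-level solver: many small EM steps across one time window."""
    for dW in dWs:
        u = em_step(u, dt, dW)
    return u

def parareal_sde(u0, T=1.0, N=10, m=20, K=5, seed=0):
    """Parareal iteration for an SDE: coarse sweeps are serial, but the
    fine sweeps over the N windows are independent of each other."""
    rng = np.random.default_rng(seed)
    dT = T / N                 # coarse step = window length
    dt = dT / m                # fine step
    # One shared Brownian path: both solvers must see the same increments
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, m))
    U = np.empty((N + 1, 2))
    U[0] = u0
    for n in range(N):         # initial serial coarse sweep (one EM step/window)
        U[n + 1] = em_step(U[n], dT, dW[n].sum())
    for _ in range(K):         # parareal corrections
        F = np.array([fine(U[n], dt, dW[n]) for n in range(N)])  # parallelizable
        Unew = np.empty_like(U)
        Unew[0] = u0
        for n in range(N):
            Gnew = em_step(Unew[n], dT, dW[n].sum())
            Gold = em_step(U[n], dT, dW[n].sum())
            Unew[n + 1] = Gnew + F[n] - Gold   # parareal update
        U = Unew
    return U
```

The fine sweeps inside the list comprehension are where an MPI implementation would distribute work across processors; after at most N corrections the iteration reproduces the serial fine-level trajectory exactly.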

  15. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  16. Designing Feature and Data Parallel Stochastic Coordinate Descent Method forMatrix and Tensor Factorization

    DTIC Science & Technology

    2016-05-11

    AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang (Korea). Grant FA2386. [Only the report documentation page was recovered; no abstract is available.]

  17. Type synthesis for 4-DOF parallel press mechanism using GF set theory

    NASA Astrophysics Data System (ADS)

    He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong

    2015-07-01

    Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis of specific parallel press mechanisms; the type synthesis and evaluation of parallel press mechanisms are seldom studied, especially for four-degree-of-freedom (DOF) press mechanisms. Here, the type synthesis of 4-DOF parallel press mechanisms is carried out based on the generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. A general procedure for the type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets, and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented, and different structures of kinematic limbs are then designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is identified. General methodologies of type synthesis and evaluation for parallel press mechanisms are suggested.

  18. Design of on-board parallel computer on nano-satellite

    NASA Astrophysics Data System (ADS)

    You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li

    2007-11-01

    This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Based on the development requirements that a nano-satellite should have small volume, low weight, low power consumption, and intelligence, this scheme departs from the traditional single-computer and dual-computer systems in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design method, it employs a shared-memory parallel computer system as the main structure; connects the telemetry system, attitude control system, and payload system by an intelligent bus; designs management functions that handle static tasks and dynamic task scheduling and that protect and recover on-site status in light of the parallel algorithms; and establishes mechanisms for fault diagnosis, restoration, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, a sound software system, and good extensibility, which fully satisfies the concept and direction of integrated electronic design.

  19. High-dose N-acetylcysteine in the prevention of COPD exacerbations: rationale and design of the PANTHEON Study.

    PubMed

    Zheng, Jin-Ping; Wen, Fu-Qiang; Bai, Chun-Xue; Wan, Huan-Ying; Kang, Jian; Chen, Ping; Yao, Wan-Zhen; Ma, Li-Jun; Xia, Qi-Kui; Gao, Yi; Zhong, Nan-Shan

    2013-04-01

    Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation; from a pathophysiological point of view it involves many components, including mucus hypersecretion, oxidative stress and inflammation. N-acetylcysteine (NAC) is a mucolytic agent with antioxidant and anti-inflammatory properties. The long-term efficacy of NAC 600 mg/d in COPD is controversial; a dose-effect relationship has been demonstrated, but at present it is not known whether a higher dose provides clinical benefits. The PANTHEON Study is a prospective, ICS-stratified, randomized, double-blind, placebo-controlled, parallel-group, multi-center trial designed to assess the efficacy and safety of high-dose (1200 mg/daily) NAC treatment for one year in moderate-to-severe COPD patients. The primary endpoint is the annual exacerbation rate. Secondary endpoints include the recurrent exacerbation hazard ratio and time to first exacerbation, as well as quality of life and pulmonary function. The hypothesis, design and methodology are described and baseline characteristics of recruited patients are presented. 1006 COPD patients (444 treated with maintenance ICS, 562 ICS-naive, aged 66.27±8.76 yrs, average post-bronchodilator FEV1 48.95±11.80% of predicted) have been randomized at 34 hospitals in China. Final results of this study will provide objective data on the effects of high-dose (1200 mg/daily) long-term NAC treatment in the prevention of COPD exacerbations and other outcome variables.

  20. Maternal Opioid Treatment: Human Experimental Research (MOTHER) – Approach, Issues, and Lessons Learned

    PubMed Central

    Jones, Hendrée E.; Fischer, Gabriele; Heil, Sarah H.; Kaltenbach, Karol; Martin, Peter R.; Coyle, Mara G.; Selby, Peter; Stine, Susan M.; O’Grady, Kevin E.; Arria, Amelia M.

    2015-01-01

    Aims The Maternal Opioid Treatment: Human Experimental Research (MOTHER) project, an eight-site randomized, double-blind, double-dummy, flexible-dosing, parallel-group clinical trial is described. This study is the most current – and single most comprehensive – research effort to investigate the safety and efficacy of maternal and prenatal exposure to methadone and buprenorphine. Methods The MOTHER study design is outlined, and its basic features are presented. Conclusions At least seven important lessons have been learned from the MOTHER study: (1) an interdisciplinary focus improves the design and methods of a randomized clinical trial; (2) multiple sites in a clinical trial present continuing challenges to the investigative team due to variations in recruitment goals, patient populations, and hospital practices that in turn differentially impact recruitment rates, treatment compliance, and attrition; (3) study design and protocols must be flexible in order to meet the unforeseen demands of both research and clinical management; (4) staff turnover needs to be addressed with a proactive focus on both hiring and training; (5) the implementation of a protocol for the treatment of a particular disorder may identify important ancillary clinical issues worthy of investigation; (6) timely tracking of data in a multi-site trial is both demanding and unforgiving; and, (7) complex multi-site trials pose unanticipated challenges that complicate the choice of statistical methods, thereby placing added demands on investigators to effectively communicate their results. PMID:23106924

  1. Dual sensory loss: development of a dual sensory loss protocol and design of a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Dual sensory loss (DSL) has a negative impact on health and wellbeing, and its prevalence is expected to increase due to demographic aging. However, specialized care or rehabilitation programs for DSL are scarce. To date, low vision rehabilitation has not sufficiently targeted concurrent impairments in vision and hearing. This study aims to 1) develop a DSL protocol (for occupational therapists working in low vision rehabilitation) which focuses on optimal use of the senses and teaches DSL patients and their communication partners to use effective communication strategies, and 2) describe the multicenter parallel randomized controlled trial (RCT) designed to test the effectiveness and cost-effectiveness of the DSL protocol. Methods/design To develop the DSL protocol, the literature was reviewed and content was discussed with professionals in eye/ear care (interviews/focus groups) and DSL patients (interviews). A pilot study was conducted to test and confirm the DSL protocol. In addition, a two-armed international multi-center RCT will evaluate the effectiveness and cost-effectiveness of the DSL protocol compared to waiting list controls, in 124 patients in low vision rehabilitation centers in the Netherlands and Belgium. Discussion This study provides a treatment protocol for rehabilitation of DSL within low vision rehabilitation, which aims to be a valuable addition to general low vision rehabilitation care. Trial registration Netherlands Trial Register (NTR) identifier: NTR2843 PMID:23941667

  2. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  3. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms using a general purpose platform for portable parallel programming called CARM is being developed and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  4. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.

  5. Rationale, study design, and implementation of the ACS1 study: effect of azilsartan on circadian and sleep blood pressure as compared with amlodipine.

    PubMed

    Kario, Kazuomi; Hoshide, Satoshi

    2014-06-01

    The ACS1 (Azilsartan Circadian and Sleep Pressure - the first study) is a multicenter, randomized, open-label, two parallel-group study carried out to investigate the efficacy of an 8-week oral treatment with azilsartan 20 mg in comparison with amlodipine 5 mg. The patients with stage I or II primary hypertension will be randomly assigned to either an azilsartan group (n=350) or an amlodipine group (n=350). The primary endpoint is the change in nocturnal systolic blood pressure (BP) as measured by ambulatory BP monitoring at the end of follow-up relative to the baseline level during the run-in period. In addition, we will carry out the same analysis after dividing patients into four nocturnal BP dipping statuses (extreme dippers, dippers, nondippers, and risers). The findings of this study will help in establishing an appropriate antihypertensive treatment for hypertensive patients with a disrupted circadian BP rhythm.

  6. Rationale, study design, and implementation of the ACS1 study: effect of azilsartan on circadian and sleep blood pressure as compared with amlodipine

    PubMed Central

    Hoshide, Satoshi

    2014-01-01

    Objective The ACS1 (Azilsartan Circadian and Sleep Pressure – the first study) is a multicenter, randomized, open-label, two parallel-group study carried out to investigate the efficacy of an 8-week oral treatment with azilsartan 20 mg in comparison with amlodipine 5 mg. Materials and methods The patients with stage I or II primary hypertension will be randomly assigned to either an azilsartan group (n=350) or an amlodipine group (n=350). The primary endpoint is the change in nocturnal systolic blood pressure (BP) as measured by ambulatory BP monitoring at the end of follow-up relative to the baseline level during the run-in period. In addition, we will carry out the same analysis after dividing patients into four nocturnal BP dipping statuses (extreme dippers, dippers, nondippers, and risers). Conclusion The findings of this study will help in establishing an appropriate antihypertensive treatment for hypertensive patients with a disrupted circadian BP rhythm. PMID:24637789

  7. A double-blind randomized placebo-controlled feasibility study evaluating individualized homeopathy in managing pain of knee osteoarthritis.

    PubMed

    Koley, Munmun; Saha, Subhranil; Ghosh, Shubhamoy

    2015-07-01

    A few homeopathic complexes have seemed to produce significant effects in osteoarthritis; still, individualized homeopathy remained untested. We evaluated the feasibility of conducting an efficacy trial of individualized homeopathy in osteoarthritis. A prospective, parallel-arm, double-blind, randomized, placebo-controlled pilot study was conducted from January to October 2014 involving 60 patients (homeopathy, n = 30; placebo, n = 30) who were suffering from acute painful episodes of knee osteoarthritis and visiting the outpatient clinic of Mahesh Bhattacharyya Homeopathic Medical College and Hospital, West Bengal, India. Statistically significant reduction was achieved in 3 visual analog scales (measuring pain, stiffness, and loss of function) and in Osteoarthritis Research Society International scores in both groups over 2 weeks (P < .05); however, group differences were not significant (P > .05). Overall, homeopathy did not appear to be superior to placebo; still, a further rigorous evaluation with a larger sample size using this design seems feasible in future. Clinical Trials Registry, India (CTRI/2014/05/004589). © The Author(s) 2015.

  8. Optoelectronic-cache memory system architecture.

    PubMed

    Chiarulli, D M; Levitan, S P

    1996-05-10

    We present an investigation of the architecture of an optoelectronic cache that can integrate terabit optical memories with the electronic caches associated with high-performance uniprocessors and multiprocessors. The use of optoelectronic-cache memories enables these terabit technologies to provide transparently low-latency secondary memory with frame sizes comparable with disk pages but with latencies that approach those of electronic secondary-cache memories. This enables the implementation of terabit memories with effective access times comparable with the cycle times of current microprocessors. The cache design is based on the use of a smart-pixel array and combines parallel free-space optical input-output to-and-from optical memory with conventional electronic communication to the processor caches. This cache and the optical memory system to which it will interface provide a large random-access memory space that has a lower overall latency than that of magnetic disks and disk arrays. In addition, as a consequence of the high-bandwidth parallel input-output capabilities of optical memories, fault service times for the optoelectronic cache are substantially less than those currently achievable with any rotational media.

  9. Electroacupuncture for chemotherapy-induced peripheral neuropathy: study protocol for a pilot multicentre randomized, patient-assessor-blinded, controlled trial

    PubMed Central

    2013-01-01

Background Chemotherapy-induced peripheral neuropathy (CIPN) is the main dose-limiting side effect of neurotoxic chemotherapeutic agents. CIPN can lead not only to loss of physical function, difficulties in activities of daily living (ADLs), and decreased quality of life, but also to dose reduction, delay or even cessation of treatment. Currently, there are few proven effective treatments for CIPN. This randomized controlled clinical trial is designed to evaluate the effects and safety of electroacupuncture (EA) for patients with CIPN. Methods/design This is a multicenter, two-armed, parallel-design, patient-assessor-blinded, randomized, sham-controlled clinical trial. Forty eligible patients with CIPN will be randomized in a ratio of 1:1 to the EA or sham EA arms. During the treatment phase, patients will undergo eight sessions of verum EA or sham EA twice weekly for four weeks, and then will be followed up for eight weeks. Electrical stimulation in the EA group will consist of a mixed frequency of 2/120 Hz and 80% of bearable intensity. Sham EA will be applied to non-acupoints, with shallow needle insertion and no current. All outcomes and analyses of results will be assessed by researchers blinded to treatment allocation. The effects of EA on CIPN will be evaluated according to both subjective and objective outcome measures. The primary outcome measure will be the European Organization for Research and Treatment of Cancer (EORTC) quality of life questionnaire to assess CIPN (QLQ-CIPN20). The secondary outcome measures will be the results on the numerical rating scale, the Semmes-Weinstein monofilament test, the nerve conduction study, and the EORTC QLQ-C30, as well as the patient’s global impression of change and adverse events. Safety will be assessed at each visit. Discussion The results of this ongoing study will provide clinical evidence for the effects and safety of EA for CIPN compared with sham EA. 
Trial registration Clinical Research Information Service: KCT0000506 PMID:23945074
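The 1:1 allocation described in trial protocols like this one is typically implemented with restricted randomization so that arm sizes stay balanced as recruitment proceeds. A minimal, illustrative sketch of permuted-block randomization (the arm labels, block size, and seed below are hypothetical, not taken from the protocol):

```python
import random

def permuted_block_randomization(n, arms=("EA", "sham"), block_size=4, seed=42):
    """Allocate n participants 1:1 using permuted blocks, so the arm
    counts never drift far apart at any point during recruitment."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = list(arms) * per_arm   # balanced block, e.g. EA, sham, EA, sham
        rng.shuffle(block)             # random order within the block
        allocation.extend(block)
    return allocation[:n]

seq = permuted_block_randomization(40)
assert seq.count("EA") == seq.count("sham") == 20
```

In practice the block size is often varied randomly and concealed from investigators so that upcoming assignments cannot be predicted.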

  10. Supercomputing on massively parallel bit-serial architectures

    NASA Technical Reports Server (NTRS)

    Iobst, Ken

    1985-01-01

    Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

  11. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
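The combination the abstract describes, one parallel reactor per condition, with conditions laid out by statistical DoE, starts from a design matrix. A minimal sketch of a two-level full-factorial design; the factor names and levels are hypothetical placeholders, not from the source:

```python
from itertools import product

# Hypothetical reaction factors and their low/high levels (illustrative only)
factors = {
    "temperature_C": [25, 60],
    "equiv_reagent": [1.0, 1.5],
    "solvent": ["THF", "DMF"],
}

# 2^3 full-factorial design: every combination of factor levels,
# one row per vessel in the parallel synthesis array
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

assert len(design) == 8  # 2 levels ** 3 factors
```

Fractional-factorial or response-surface designs reduce the run count when the number of factors grows, which is the usual next step in DoE-driven process optimization.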

  12. Transcranial direct current stimulation for depression in Alzheimer's disease: study protocol for a randomized controlled trial.

    PubMed

    Narita, Zui; Yokoi, Yuma

    2017-06-19

Patients with Alzheimer's disease frequently exhibit neuropsychiatric symptoms as well as cognitive deficits. Depression is among the most common of these neuropsychiatric symptoms, but antidepressant drugs have not shown significant benefit for it. Electroconvulsive therapy shows clinical benefit, but its safety remains uncertain because of potentially severe adverse events. Transcranial direct current stimulation, which applies a weak direct electrical current to the brain, may be a safer neuromodulation alternative. Although transcranial direct current stimulation has plausible evidence of an effect on depression in young adult patients, no study has examined it in older subjects with depression in Alzheimer's disease. We therefore present a study protocol designed to evaluate the safety and clinical effect of transcranial direct current stimulation on depression in Alzheimer's disease in subjects aged over 65 years. This is a two-arm, parallel-design, randomized controlled trial in which patients and assessors will be blinded. Subjects will be randomized to either an active or a sham transcranial direct current stimulation group. Participants in both groups will be evaluated at baseline, immediately after the intervention, and 2 weeks after the intervention. This study investigates the safety and effect of transcranial direct current stimulation, which may have a significant impact on both depression and cognition in patients with Alzheimer's disease and may help enhance their quality of life. ClinicalTrials.gov, NCT02351388 . Registered on 27 January 2015. Last updated on 30 May 2016.

  13. Plaque and gingivitis reduction efficacy of an advanced pulsonic toothbrush: a 4-week randomized and controlled clinical trial.

    PubMed

    Sharma, Naresh C; Qaqish, Jimmy G; He, Tao; Walters, Patricia A; Grender, Julie M; Biesbrock, Aaron R

    2010-12-01

To compare the safety and efficacy of a novel sonic power toothbrush and a manual toothbrush in reducing gingivitis and plaque over a 4-week period. This study employed a randomized, two-treatment, examiner-blinded, parallel-group design. Subjects with evidence of gingivitis were randomly assigned to 4 weeks' twice-daily home use of either the Oral-B Pulsonic sonic toothbrush or an ADA reference manual toothbrush. At baseline (Visit 1) and again after product use at Week 4, subjects received gingivitis evaluations with the Modified Gingival Index (MGI) and Gingival Bleeding Index (GBI) examinations, followed by plaque assessment using the Rustogi Modified Navy Plaque Index (RMNPI). Subjects abstained from all oral hygiene for 12 hours and from eating, drinking, and smoking for 4 hours before both visits. Both brushes significantly reduced gingivitis, gingival bleeding, and plaque compared with baseline, and were well tolerated by the 129 subjects completing the study. The sonic toothbrush was statistically significantly (P < 0.0001) more effective than the manual brush, with greater relative mean reductions in MGI, GBI and RMNPI of 11.9%, 62.3% and 46.5%, respectively.

  14. Cocoa polyphenols enhance positive mood states but not cognitive performance: a randomized, placebo-controlled trial.

    PubMed

    Pase, Matthew P; Scholey, Andrew B; Pipingas, Andrew; Kras, Marni; Nolidin, Karen; Gibbs, Amy; Wesnes, Keith; Stough, Con

    2013-05-01

    This study aimed to examine the acute and sub-chronic effects of cocoa polyphenols on cognition and mood. In a randomized, double-blind study, healthy middle-aged participants received a dark chocolate drink mix standardized to contain 500 mg, 250 mg or 0 mg of polyphenols (placebo) in a parallel-groups design. Participants consumed their assigned treatment once daily for 30 days. Cognition was measured with the Cognitive Drug Research system and self-rated mood with the Bond-Lader Visual Analogue Scale. Participants were tested at baseline, at 1, 2.5 and 4 h after a single acute dose and again after receiving 30 days of treatment. In total, 72 participants completed the trial. After 30 days, the high dose of treatment significantly increased self-rated calmness and contentedness relative to placebo. Mood was unchanged by treatment acutely while cognition was unaffected by treatment at all time points. This randomized controlled trial is perhaps the first to demonstrate the positive effects of cocoa polyphenols on mood in healthy participants. This provides a rationale for exploring whether cocoa polyphenols can ameliorate the symptoms associated with clinical anxiety or depression.

  15. Quality of reporting of randomized clinical trials in implant dentistry. A systematic review on critical aspects in design, outcome assessment and clinical relevance.

    PubMed

    Cairo, Francesco; Sanz, Ignacio; Matesanz, Paula; Nieri, Michele; Pagliaro, Umberto

    2012-02-01

The aim of this systematic review (SR) was to assess the quality of reporting of randomized clinical trials (RCTs) in the field of implant dentistry, its evolution over time, and the possible relations between quality items and reported outcomes. RCTs in implant dentistry were retrieved through electronic and hand searches. Risk of bias in individual studies was assessed focusing on study design, outcome assessment and clinical relevance. Associations between quality items and year of publication of RCTs or reporting of statistically significant outcomes were tested. Among the 495 originally screened manuscripts published from 1989 to April 2011, 276 RCTs were assessed in this SR; 59% of them were published between 2006 and 2011. RCTs were mainly parallel (65%), with a single centre (83%) and a superiority design (88%). Trials in implant dentistry showed several methodological flaws: only 37% showed a random sequence generation at low risk of bias, 75% did not provide information on allocation concealment, only 12% performed a correct sample size calculation, and the examiner was blinded in only 42% of studies where blinding was feasible. In addition, only 21% of RCTs declared operator experience and 31% reported patient-related outcomes. Many quality items improved over time. Allocation concealment at high risk of bias (p = 0.0125), no information on drop-out (p = 0.0318) and lack of CONSORT adherence (p = 0.0333) were associated with statistically significant reported outcomes. The overall quality of reporting of RCTs in implant dentistry is poor and has only partially improved in recent years. Caution is suggested when interpreting these RCTs, since risk of bias was associated with a higher chance of reporting statistically significant results. © 2012 John Wiley & Sons A/S.
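The correct sample size calculation that only 12% of the reviewed trials performed is, for a two-arm parallel superiority trial comparing means, a one-line normal-approximation formula. A sketch using only the standard library (the effect size and variance in the example are illustrative, not drawn from any reviewed trial):

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-arm, parallel,
    superiority trial comparing means: n = 2 * ((z_a + z_b) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided type I error
    z_b = z(power)           # type II error complement
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Detecting a half-standard-deviation difference at 80% power, alpha = 0.05
assert n_per_arm(delta=0.5, sd=1.0) == 63
```

Real trials then inflate this figure for expected drop-out, and cluster- or group-randomized designs multiply it by a design effect for within-cluster correlation.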

  16. Design and rationale for the Effects of Ticagrelor and Clopidogrel in Patients with Peripheral Artery Disease (EUCLID) trial.

    PubMed

    Berger, Jeffrey S; Katona, Brian G; Jones, W Schuyler; Patel, Manesh R; Norgren, Lars; Baumgartner, Iris; Blomster, Juuso; Mahaffey, Kenneth W; Held, Peter; Millegård, Marcus; Heizer, Gretchen; Reist, Craig; Fowkes, F Gerry; Hiatt, William R

    2016-05-01

    Despite overwhelming data demonstrating the efficacy of antiplatelet therapy in heart disease and stroke, data in peripheral artery disease (PAD) are less compelling. Aspirin has modest evidence supporting a reduction in cardiovascular events in patients with PAD, whereas clopidogrel monotherapy may be more effective in PAD. Ticagrelor, a potent, reversibly binding P2Y12 receptor antagonist, is beneficial in patients with acute coronary syndrome and prior myocardial infarction. The EUCLID trial is designed to address the need for effective antiplatelet therapy in PAD to decrease the risk of cardiovascular events. EUCLID is a randomized, double-blind, parallel-group, multinational clinical trial designed to evaluate the efficacy and safety of ticagrelor compared with clopidogrel for the prevention of major adverse cardiovascular events in subjects with symptomatic PAD. Subjects with established PAD will be randomized in a 1:1 fashion to ticagrelor 90 mg twice daily or clopidogrel 75 mg daily. The primary end point is a composite of cardiovascular death, myocardial infarction, or ischemic stroke. Other end points address limb events including acute leg ischemia, need for revascularization, disease progression by ankle-brachial index, and quality of life. The primary safety objective is Thrombolysis in Myocardial Infarction-defined major bleeding. Recruitment began in December 2012 and was completed in March 2014; 13,887 patients were randomized. The trial will continue until at least 1,364 adjudicated primary end points occur. The EUCLID study is investigating whether treatment with ticagrelor versus clopidogrel, given as antiplatelet monotherapy, will reduce the incidence of cardiovascular and limb-specific events in patients with symptomatic PAD. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Feasibility and Efficacy of an mHealth Game for Managing Anxiety: "Flowy" Randomized Controlled Pilot Trial and Design Evaluation.

    PubMed

    Pham, Quynh; Khatib, Yasmin; Stansfeld, Stephen; Fox, Simon; Green, Tobias

    2016-02-01

    Meeting the complex needs of patients with chronic common mental health disorders (CMHDs) may be the greatest challenge facing organized medical practice. On the basis of a well-established and proven theoretical foundation for controlled respiration as a behavioral intervention for CMHDs, as well as preliminary evidence that gamification can improve health outcomes through increasing patient engagement, this randomized controlled pilot study evaluated the feasibility and clinical efficacy of a mobile health game called "Flowy" ( www.flowygame.com ) that digitally delivered breathing retraining exercises for anxiety, panic, and hyperventilation symptom management. We designed an unblinded, Web-based, parallel-group randomized controlled trial focusing on feasibility, clinical efficacy, and design proof of concept. In the intervention condition (n = 31), participants received free access to "Flowy" for 4 weeks. In the control condition (n = 32), participants were placed on a waitlist for 4 weeks before being offered free access to "Flowy." Online measurements using psychological self-report questionnaires were made at 2 and 4 weeks post-baseline. At trial conclusion, participants found "Flowy" acceptable as an anxiety management intervention. "Flowy" engaged participants sufficiently to endorse proactive gameplay. Intent-to-treat analysis revealed a reduction in anxiety, panic, and self-report hyperventilation scores in both trial arms, with the intervention arm experiencing greater quality of life. Participants perceived "Flowy" as a fun and useful intervention, proactively used "Flowy" as part of their care, and would recommend "Flowy" to family and friends. Our results suggest that a digital delivery of breathing retraining exercises through a mobile health game can manage anxiety, panic, and hyperventilation symptoms associated with CMHDs.

  18. Slice-selective RF pulses for in vivo B1+ inhomogeneity mitigation at 7 tesla using parallel RF excitation with a 16-element coil.

    PubMed

    Setsompop, Kawin; Alagappan, Vijayanand; Gagoski, Borjan; Witzel, Thomas; Polimeni, Jonathan; Potthast, Andreas; Hebrank, Franz; Fontius, Ulrich; Schmitt, Franz; Wald, Lawrence L; Adalsteinsson, Elfar

    2008-12-01

    Slice-selective RF waveforms that mitigate severe B1+ inhomogeneity at 7 Tesla using parallel excitation were designed and validated in a water phantom and human studies on six subjects using a 16-element degenerate stripline array coil driven with a butler matrix to utilize the eight most favorable birdcage modes. The parallel RF waveform design applied magnitude least-squares (MLS) criteria with an optimized k-space excitation trajectory to significantly improve profile uniformity compared to conventional least-squares (LS) designs. Parallel excitation RF pulses designed to excite a uniform in-plane flip angle (FA) with slice selection in the z-direction were demonstrated and compared with conventional sinc-pulse excitation and RF shimming. In all cases, the parallel RF excitation significantly mitigated the effects of inhomogeneous B1+ on the excitation FA. The optimized parallel RF pulses for human B1+ mitigation were only 67% longer than a conventional sinc-based excitation, but significantly outperformed RF shimming. For example the standard deviations (SDs) of the in-plane FA (averaged over six human studies) were 16.7% for conventional sinc excitation, 13.3% for RF shimming, and 7.6% for parallel excitation. This work demonstrates that excitations with parallel RF systems can provide slice selection with spatially uniform FAs at high field strengths with only a small pulse-duration penalty. (c) 2008 Wiley-Liss, Inc.
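The magnitude least-squares (MLS) criterion the authors contrast with conventional LS can be illustrated with the standard variable-exchange iteration: since only the magnitude of the excitation profile matters, the target phase is re-estimated from the current solution and the linear problem is re-solved. The system matrix and dimensions below are hypothetical stand-ins, not the authors' 16-channel spatial-domain model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical linear model: A maps RF samples -> complex excitation profile
A = rng.standard_normal((64, 16)) + 1j * rng.standard_normal((64, 16))
target = np.ones(64)  # desired uniform flip-angle magnitude

# Conventional LS: match the complex target directly (phase pinned to zero)
b_ls, *_ = np.linalg.lstsq(A, target.astype(complex), rcond=None)

# MLS via variable exchange: let the profile phase float, re-solve repeatedly
b = b_ls
for _ in range(20):
    phase = np.exp(1j * np.angle(A @ b))          # current profile phase
    b, *_ = np.linalg.lstsq(A, target * phase, rcond=None)

err_ls = np.linalg.norm(np.abs(A @ b_ls) - target)
err_mls = np.linalg.norm(np.abs(A @ b) - target)
assert err_mls <= err_ls + 1e-9  # exchange never increases magnitude error
```

Each exchange step minimizes an upper bound on the magnitude error, so the iteration is monotone, which is why MLS designs achieve the flatter flip-angle profiles reported in the abstract.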

  19. [Clinical research. XIII. Research design contribution in the structured revision of an article].

    PubMed

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2013-01-01

The quality of the information obtained depends on the research design, and this is integrated into the structured review through the causality model, illustrated here with the article "Reduction in the Incidence of Nosocomial Pneumonia Poststroke by Using the 'Turn-mob' Program", which corresponds to a clinical trial design. Points to identify and analyze include: ethical issues that safeguard the safety of and respect for patients; and randomization, which seeks to create baseline-homogeneous groups in which subjects have the same probability of receiving any of the compared maneuvers and the same pre-maneuver probability of adherence, and which facilitates blinded outcome measurement and an even distribution between groups of subjects equally likely to leave the study for reasons unrelated to the maneuvers. Other aspects are the relativity of the comparison, blinding of the maneuver, parallel application of the comparative maneuver, early stopping, and analysis according to the degree of adherence. Analysis in accordance with the design is complementary, since it is based on the architectural model of causality and on considerations of statistical and clinical relevance.

  20. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. 
Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
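The pattern this record describes, task-parallel objects that each manage their own data-parallel work, can be sketched with standard-library tools; the model names and toy kernel below are hypothetical, and threads stand in for what a language like the one proposed would map to real parallel hardware:

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing.dummy import Pool  # thread-backed map, for illustration

def simulate_cell(x):
    """Data-parallel kernel: the same operation applied to one grid cell."""
    return x * x

def run_model(name, grid):
    """One task: a model that internally runs a data-parallel map-reduce."""
    with Pool(4) as pool:
        return name, sum(pool.map(simulate_cell, grid))

# Task parallelism: independent model objects run concurrently;
# data parallelism: each model maps its kernel across its own data.
tasks = {"climate": range(10), "airframe": range(5)}
with ThreadPoolExecutor() as ex:
    results = dict(ex.map(lambda kv: run_model(*kv), tasks.items()))

assert results == {"climate": 285, "airframe": 30}
```

The two levels compose freely here, mirroring the proposal's goal of task-parallel objects acting as managers of either task- or data-parallel objects.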

  1. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
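The kernel being parallelized here is Brandes-style betweenness centrality. A serial sketch for unweighted graphs in plain Python (not the authors' lock-free implementation) shows the two phases the parallel algorithm partitions across threads: BFS shortest-path counting, then backward dependency accumulation:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted graphs.
    adj: dict mapping node -> iterable of neighbours (directed edges)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS from s, counting shortest paths (sigma)
        order = []
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Phase 2: back-propagate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        while order:
            w = order.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0 - 1 - 2: only the middle vertex lies on shortest paths
bc = betweenness_centrality({0: [1], 1: [0, 2], 2: [1]})
assert bc[1] == 2.0 and bc[0] == 0.0 and bc[2] == 0.0
```

The lock-free parallel version distributes the source vertices s (and, at finer grain, the BFS frontier) across threads, which is where the atomic-free updates and cache-locality changes in the abstract come in.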

  2. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  3. Family, Community and Clinic Collaboration to Treat Overweight and Obese Children: Stanford GOALS -- a Randomized Controlled Trial of a Three-Year, Multi-Component, Multi-Level, Multi-Setting Intervention

    PubMed Central

    Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.

    2013-01-01

Objective To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants Seven- through eleven-year-old overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families. 
PMID:24028942

  4. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for tasks (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear-theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.
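The response surface methodology at the heart of this work replaces an expensive analysis with a cheap polynomial surrogate fitted by least squares. A minimal sketch with two design variables and a quadratic surface; the sample counts, coefficients, and noise level are hypothetical, not from the HSCT study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cheap-analysis samples: two design variables in [-1, 1],
# true response quadratic plus a little evaluation noise
X = rng.uniform(-1, 1, size=(200, 2))
true = np.array([1.0, 2.0, -1.0, 0.5, 1.0, 0.0])  # basis coefficients

def basis(X):
    """Quadratic response-surface basis: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

y = basis(X) @ true + rng.normal(0.0, 0.01, len(X))

# Fit the surrogate; it then stands in for the expensive analysis
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
assert np.allclose(coef, true, atol=0.05)

# Optimizers can now query the surrogate cheaply at any design point
pred = basis(np.array([[0.3, -0.2]])) @ coef
```

In the multi-fidelity variant the report describes, the surrogate is fitted to many cheap low-fidelity evaluations and corrected with a few high-fidelity ones.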

  5. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

Recently, demands for high-speed signal processing have been increasing especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  6. Xyce parallel electronic simulator users guide, version 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  7. Xyce parallel electronic simulator users' guide, Version 6.0.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, written to support the simulation needs of Sandia National Laboratories' electrical designers. Development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from the solver algorithms and allows new types of analysis to be developed without requiring analysis-specific device models; device models specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing implementation that runs efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory machines. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  8. Xyce parallel electronic simulator users guide, version 6.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, written to support the simulation needs of Sandia National Laboratories' electrical designers. Development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from the solver algorithms and allows new types of analysis to be developed without requiring analysis-specific device models; device models specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing implementation that runs efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory machines. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  9. Quantity and source of dietary protein influence metabolite production by gut microbiota and rectal mucosa gene expression: a randomized, parallel, double-blind trial in overweight humans.

    PubMed

    Beaumont, Martin; Portune, Kevin Joseph; Steuer, Nils; Lan, Annaïg; Cerrudo, Victor; Audebert, Marc; Dumont, Florent; Mancano, Giulia; Khodorova, Nadezda; Andriamihaja, Mireille; Airinei, Gheorghe; Tomé, Daniel; Benamouzig, Robert; Davila, Anne-Marie; Claus, Sandrine Paule; Sanz, Yolanda; Blachier, François

    2017-10-01

    Background: Although high-protein diets (HPDs) are frequently consumed for body-weight control, little is known about their consequences for gut microbiota composition and metabolic activity and for large-intestine mucosal homeostasis. Moreover, the effects of HPDs according to the source of protein need to be considered in this context. Objective: The objective of this study was to evaluate the effects of the quantity and source of dietary protein on microbiota composition, bacterial metabolite production, and consequences for the large-intestinal mucosa in humans. Design: A randomized, double-blind, parallel-design trial was conducted in 38 overweight individuals who received a 3-wk isocaloric supplementation with casein, soy protein, or maltodextrin as a control. Fecal and rectal biopsy-associated microbiota composition was analyzed by 16S ribosomal DNA sequencing. Fecal, urinary, and plasma metabolomes were assessed by ¹H nuclear magnetic resonance. The mucosal transcriptome in rectal biopsies was determined with the use of microarrays. Results: HPDs did not alter the microbiota composition but induced a shift in bacterial metabolism toward amino acid degradation, with different metabolite profiles according to the protein source. Correlation analysis identified new potential bacterial taxa involved in amino acid degradation. Fecal water cytotoxicity was not modified by HPDs but was associated with a specific microbiota and bacterial metabolite profile. Casein and soy protein HPDs did not induce inflammation but differentially modified the expression of genes playing key roles in homeostatic processes in the rectal mucosa, such as cell cycle or cell death. Conclusions: This human intervention study shows that the quantity and source of dietary protein act as regulators of gut microbiota metabolite production and host gene expression in the rectal mucosa, raising new questions about the impact of HPDs on large-intestine mucosal homeostasis. This trial was registered at clinicaltrials.gov as NCT02351297. © 2017 American Society for Nutrition.

  10. Low-phytate wholegrain bread instead of high-phytate wholegrain bread in a total diet context did not improve iron status of healthy Swedish females: a 12-week, randomized, parallel-design intervention study.

    PubMed

    Hoppe, Michael; Ross, Alastair B; Svelander, Cecilia; Sandberg, Ann-Sofie; Hulthén, Lena

    2018-05-23

    To investigate the effects of eating wholegrain rye bread with high or low amounts of phytate on iron status in women under free-living conditions. In this 12-week, randomized, parallel-design intervention study, 102 females were allocated into two groups, a high-phytate-bread group and a low-phytate-bread group, which received 200 g/day of blanched wholegrain rye bread or 200 g/day of dephytinized wholegrain rye bread, respectively, in addition to their habitual daily diet. Iron status biomarkers and plasma alkylresorcinols were analyzed at baseline and post-intervention. Fifty-five females completed the study. In the high-phytate-bread group (n = 31) there was no change in any of the iron status biomarkers after 12 weeks of intervention (p > 0.05). In the low-phytate-bread group (n = 24) there were significant decreases in both ferritin (mean = 12%; from 32 ± 7 to 27 ± 6 µg/L, geometric mean ± SEM, p < 0.018) and total body iron (mean = 12%; from 6.9 ± 1.4 to 5.4 ± 1.1 mg/kg, p < 0.035). Plasma alkylresorcinols indicated that most subjects complied with the intervention. In Swedish females of reproductive age, 12 weeks of high-phytate wholegrain bread consumption had no effect on iron status, whereas consumption of low-phytate wholegrain bread for 12 weeks resulted in a reduction in markers of iron status. Although single-meal studies clearly show an increase in iron bioavailability after dephytinization of cereals, medium-term consumption of reduced-phytate bread under free-living conditions suggests that this strategy does not improve iron status in healthy women of reproductive age.

  11. A Randomized Single Blind Parallel Group Study Comparing Monoherbal Formulation Containing Holarrhena Antidysenterica Extract with Mesalamine in Chronic Ulcerative Colitis Patients

    PubMed Central

    Johari, Sarika; Gandhi, Tejal

    2016-01-01

    Background: Incidences of side effects and relapses are very common in chronic ulcerative colitis patients after termination of treatment. Aims and Objectives: This study aims to compare treatment with a monoherbal formulation of Holarrhena antidysenterica against Mesalamine in chronic ulcerative colitis patients, with special emphasis on side effects and relapse. Settings and Design: Patients were enrolled from an Ayurveda hospital and a private hospital, Gujarat. The study used a randomized, parallel-group, single-blind design. Materials and Methods: The protocol was approved by the Institutional Human Research Ethics Committee of Anand Pharmacy College on 23rd Jan 2013. Three groups (n = 10) were treated with Mesalamine (Group I), the monoherbal tablet (Group II), or a combination of both (Group III), respectively. Baseline characteristics, factors affecting quality of life, chronicity of disease, signs and symptoms, body weight, and laboratory investigations were recorded. Side effects and complications developed, if any, were recorded during and after the study. Statistical Analysis Used: Results were expressed as mean ± SEM. Data were statistically evaluated using the t-test, Wilcoxon test, Mann-Whitney U test, Kruskal-Wallis test, and ANOVA, wherever applicable, using GraphPad Prism 6. Results: All the groups responded positively to the treatments. All the patients were positive for occult blood in stool, which reversed significantly after treatment along with a rise in hemoglobin. Patients treated with herbal tablets alone showed maximal reduction in abdominal pain, diarrhea, bowel frequency, and stool consistency scores compared with Mesalamine-treated patients. Treatment with the herbal tablet alone and in combination with Mesalamine significantly reduced stool infection. Patients treated with the herbal drug alone and in combination did not report any side effects, relapse, or complications, while 50% of patients treated with Mesalamine exhibited relapse with diarrhea and flatulence after drug withdrawal. Conclusion: Thus, the monoherbal formulation, alone and with Mesalamine, was more efficacious than Mesalamine alone in UC. PMID:28182023

  12. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second is transporting software between dissimilar parallel computers. In general, we expect more hardware-specific information to be included in software designs for parallel computers than in designs for sequential computers; this inclusion is an instance of portability being sacrificed for high performance. New parallel computers are introduced frequently, so a developer trying to keep software on the current high-performance hardware faces an almost continual series of expensive software ports. The problem addressed by the proposed research is to create a design methodology that helps designers more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two; a more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal that describes our research on the issues of software portability and high performance. The list of research tasks is specified in the proposal. The proposal, 'A Design Methodology for Portable Software on Parallel Computers,' is summarized in section three and provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof of concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.

  13. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.

    1995-01-01

    High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
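As a sketch of the kind of per-request analysis such a workload characterization involves, the toy trace below (invented records, not CHARISMA data) classifies each I/O request as sequential or nonsequential according to whether it starts where the previous request on the same file ended:

```python
# Hypothetical sketch of per-request trace analysis: given (file_id, offset,
# size) records, a request is "sequential" if it begins exactly where the
# previous request on the same file left off. The trace records are invented
# for illustration.

from collections import defaultdict

trace = [  # (file_id, offset, size) -- invented sample requests
    (1, 0, 512), (1, 512, 512), (1, 1024, 512),   # sequential run on file 1
    (2, 4096, 128), (2, 0, 128),                  # a seek on file 2
    (1, 8192, 512),                               # a seek on file 1
]

next_offset = {}                      # file_id -> expected next offset
counts = defaultdict(int)
for fid, off, size in trace:
    kind = "sequential" if next_offset.get(fid) == off else "nonsequential"
    counts[kind] += 1
    next_offset[fid] = off + size

print(dict(counts))   # {'nonsequential': 4, 'sequential': 2}
```

Aggregating such classifications over every request on a production machine, as the paper does, reveals the mix of access patterns a parallel file system must serve well.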

  14. Effectiveness of a Web-Based Intervention to Reduce Alcohol Consumption among French Hazardous Drinkers: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Guillemont, Juliette; Cogordan, Chloé; Nalpas, Bertrand; Nguyen-Thanh, Việt; Richard, Jean-Baptiste; Arwidson, Pierre

    2017-01-01

    This study aims to evaluate the effectiveness of a web-based intervention to reduce alcohol consumption among hazardous drinkers. A two-group parallel randomized controlled trial was conducted among adults identified as hazardous drinkers according to the Alcohol Use Disorders Identification Test. The intervention delivers personalized normative…

  15. Random Walk Method for Potential Problems

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    2002-01-01

    A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
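The random walk approach to potential problems is easiest to sketch in one dimension, where Laplace's equation on [0,1] with u(0)=0 and u(1)=1 has the exact solution u(x)=x: the estimate at an interior grid node is the average boundary value reached by unbiased walks. The grid size and walk count below are arbitrary choices, not the paper's:

```python
# A minimal sketch of a grid-based random walk method for Laplace's
# equation in 1-D. Each walk steps +/-1 with equal probability until it
# hits a boundary; the estimate at node k is the mean boundary value hit.

import random

random.seed(1)

def laplace_estimate(k, n_nodes=10, walks=20000):
    """Estimate u(k/n_nodes) for u''=0, u(0)=0, u(1)=1 by random walks."""
    hits = 0
    for _ in range(walks):
        pos = k
        while 0 < pos < n_nodes:
            pos += random.choice((-1, 1))
        hits += (pos == n_nodes)      # boundary value: u=1 at the right end
    return hits / walks

print(laplace_estimate(3))   # close to the exact value u(0.3) = 0.3
```

Because every walk is independent, the walks can be farmed out across processors with essentially no communication, which is why the method parallelizes so well on a Beowulf cluster.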

  16. Effects of royal jelly supplementation on glycemic control and oxidative stress factors in type 2 diabetic female: a randomized clinical trial.

    PubMed

    Pourmoradian, Samira; Mahdavi, Reza; Mobasseri, Majid; Faramarzi, Elnaz; Mobasseri, Mehrnoosh

    2014-05-01

    It has been proposed that royal jelly has antioxidant properties and may improve oxidative stress and glycemic control. Therefore, we investigated the effects of royal jelly supplementation in diabetic females. In this pilot, parallel-design randomized clinical trial, 50 female volunteers with type 2 diabetes were randomly allocated to the supplemented (n = 25) and placebo (n = 25) groups, based on a random block procedure produced by Random Allocation Software, and given a daily dose of 1,000 mg royal jelly soft gel or placebo, respectively, for 8 weeks. Before and after the intervention, glycemic control indices and antioxidant and oxidative stress factors were measured. After royal jelly supplementation, the mean fasting blood glucose decreased remarkably (163.05±42.51 mg/dL vs. 149.68±42.7 mg/dL). Royal jelly supplementation resulted in a significant reduction in the mean serum glycosylated hemoglobin level (8.67%±2.24% vs. 7.05%±1.45%, P=0.001) and a significant elevation in the mean insulin concentration (70.28±29.16 pmol/L vs. 86.46±27.50 pmol/L, P=0.01). Supplementation significantly increased erythrocyte superoxide dismutase and glutathione peroxidase activities and decreased malondialdehyde levels (P<0.05). At the end of the study, the mean total antioxidant capacity rose in both groups, though not significantly. On the basis of our findings, it seems that royal jelly supplementation may be beneficial in controlling diabetes outcomes. Further studies with larger sample size are warranted.
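The "random block procedure" mentioned above is commonly implemented as permuted-block randomization; here is a minimal sketch, assuming a 1:1 two-arm design and an illustrative block size of 4 (the trial's actual block size is not stated):

```python
# Permuted-block randomization, sketched: each block contains an equal
# number of assignments per arm in random order, keeping the arms balanced
# throughout enrollment. Block size 4 is an illustrative assumption.

import random

def permuted_block_allocation(n_participants, block_size=4, seed=42):
    """1:1 allocation: each block is a shuffled half-and-half of the arms."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["royal jelly"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    # Truncating a partial final block can unbalance arms by at most block_size/2.
    return sequence[:n_participants]

arms = permuted_block_allocation(50)
print(arms.count("royal jelly"), "vs", arms.count("placebo"))
```

The balance guarantee is the point of blocking: at the end of any complete block the two arms are exactly equal, which simple coin-flip randomization cannot promise.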

  17. National Combustion Code Parallel Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Benyo, Theresa (Technical Monitor)

    2002-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.

  18. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have addressed the problem of generating Poisson disks on surfaces, owing to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space; this feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝⁿ. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
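The priority-based conflict resolution described above can be sketched in the plane (the paper's contribution is doing this intrinsically on surfaces, which this toy version does not attempt). Each candidate draws a random unique priority, and in each round every candidate that outranks all of its conflicting undecided neighbors is accepted, so a round's checks are independent and could run concurrently:

```python
# Toy priority-based Poisson disk sampling in 2-D Euclidean space.
# Radius and candidate count are arbitrary illustrative choices.

import random

random.seed(7)
R = 0.1                                      # minimum sample spacing
cands = [(random.random(), random.random()) for _ in range(400)]
prio = {c: random.random() for c in cands}   # unique with probability 1

def conflict(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < R * R

undecided, accepted = set(cands), []
while undecided:
    # One "parallel round": every undecided candidate checks its conflicting
    # neighbors independently, so this comprehension could run as threads.
    winners = [c for c in undecided
               if all(prio[c] > prio[o] for o in undecided
                      if o is not c and conflict(c, o))]
    for w in winners:
        accepted.append(w)
        undecided -= {o for o in undecided if conflict(w, o)}  # removes w too

# No two accepted samples are closer than R:
assert all(not conflict(a, b) for i, a in enumerate(accepted)
           for b in accepted[i + 1:])
print(len(accepted), "samples accepted")
```

Progress is guaranteed because the globally highest-priority undecided candidate always wins its round, and any two conflicting candidates have comparable priorities, so winners of the same round never conflict with each other.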

  19. Design of the NL-ENIGMA study: Exploring the effect of Souvenaid on cerebral glucose metabolism in early Alzheimer's disease.

    PubMed

    Scheltens, Nienke M E; Kuyper, Ingrid S; Boellaard, Ronald; Barkhof, Frederik; Teunissen, Charlotte E; Broersen, Laus M; Lansbergen, Marieke M; van der Flier, Wiesje M; van Berckel, Bart N M; Scheltens, Philip

    2016-11-01

    Alzheimer's disease is associated with early synaptic loss. Specific nutrients are known to be rate limiting for synapse formation, and studies have shown that administering specific nutrients may improve memory function, possibly by increasing synapse formation. This Dutch study explores the Effect of a specific Nutritional Intervention on cerebral Glucose Metabolism in early Alzheimer's disease (NL-ENIGMA, Dutch Trial Register NTR4718, http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4718). The NL-ENIGMA study is designed to test whether the specific multinutrient combination Fortasyn Connect, present in the medical food Souvenaid, influences cerebral glucose metabolism as a marker for improved synapse function. The study is a double-blind, randomized, controlled, parallel-group, single-center trial. Forty drug-naive patients with mild cognitive impairment or mild dementia with evidence of amyloid deposition are randomized 1:1 to receive either the multinutrient combination or placebo once daily. Main exploratory outcome parameters include absolute quantitative positron emission tomography with ¹⁸F-fluorodeoxyglucose (including arterial sampling) and standard uptake value ratios normalized to the cerebellum or pons after 24 weeks. We expect the NL-ENIGMA study to provide further insight into the potential of this multinutrient combination to improve synapse function.

  20. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.
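A toy version of such evolutionary circuit search, shrunk to evolving only the device values of a hypothetical one-pole RC low-pass filter toward a target cutoff frequency (the paper's system also evolves topology and circuit size, which this sketch does not):

```python
# Minimal genetic-algorithm sketch for circuit value search. The circuit,
# fitness function, and GA parameters are all illustrative assumptions.

import math, random

random.seed(0)
TARGET_FC = 1000.0                       # desired cutoff frequency, Hz

def fitness(ind):
    r, c = ind
    fc = 1.0 / (2 * math.pi * r * c)     # RC low-pass cutoff frequency
    return -abs(math.log10(fc / TARGET_FC))   # 0 is a perfect match

def mutate(ind):
    # Perturb each device value multiplicatively (log-uniform step).
    return tuple(v * 10 ** random.uniform(-0.1, 0.1) for v in ind)

pop = [(10 ** random.uniform(2, 5), 10 ** random.uniform(-9, -6))
       for _ in range(40)]               # random R (ohm), C (farad) values
for _ in range(60):
    # Fitness evaluation is the step a parallel GA farms out to workers.
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]

best_r, best_c = max(pop, key=fitness)
print(f"evolved fc = {1 / (2 * math.pi * best_r * best_c):.0f} Hz")  # near 1000 Hz
```

Because each individual's fitness is evaluated independently, the evaluation loop parallelizes trivially across processors, which is the structure the paper exploits; real circuit evaluation would call a simulator rather than a closed-form formula.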

  1. ARTS III/Parallel Processor Design Study

    DOT National Transportation Integrated Search

    1975-04-01

    It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accomodate a large growth in air traff...

  2. The efficacy and safety of Baoji Tablets for treating common cold with summer-heat and dampness syndrome: study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Despite the high incidence and economic impact of the common cold, there are still no effective therapeutic options available. Although traditional Chinese medicine (TCM) is widely used in China to treat the common cold, there is still a lack of high-quality clinical trials. This article sets forth the protocol for a high-quality trial of a new TCM drug, Baoji Tablets, which is designed to treat the common cold with summer-heat and dampness syndrome (CCSDS). The trial is evaluating both the efficacy and safety of Baoji Tablets. Methods/design This study is designed as a multicenter, phase II, parallel-group, double-blind, double-dummy, randomized, placebo-controlled trial. A total of 288 patients will be recruited from four centers. The new-tablets group receives Baoji Tablets 0.9 g and dummy Baoji Pills 3.7 g; the old-pills group receives dummy Baoji Tablets 0.9 g and Baoji Pills 3.7 g; and the placebo control group receives dummy Baoji Tablets 0.9 g and dummy Baoji Pills 3.7 g. All drugs are taken three times daily for 3 days. The primary outcome is the duration of all symptoms. Secondary outcomes include the duration of primary and secondary symptoms, changes in primary and secondary symptom scores and cumulative symptom score at day 4, and an evaluation of treatment efficacy. Discussion This is the first multicenter, double-blind, double-dummy, randomized, placebo-controlled trial designed to evaluate treatment of CCSDS in an adult population from China. It will establish the basis for a scientific and objective assessment of the efficacy and safety of Baoji Tablets for treating CCSDS, and provide evidence for a phase III clinical trial. Trial registration This study is registered with the Chinese Clinical Trial Registry. The registration number is ChiCTR-TRC-13003197. PMID:24359521

  3. Predictors of short- and long-term adherence with a Mediterranean-type diet intervention: the PREDIMED randomized trial.

    PubMed

    Downer, Mary Kathryn; Gea, Alfredo; Stampfer, Meir; Sánchez-Tainta, Ana; Corella, Dolores; Salas-Salvadó, Jordi; Ros, Emilio; Estruch, Ramón; Fitó, Montserrat; Gómez-Gracia, Enrique; Arós, Fernando; Fiol, Miquel; De-la-Corte, Francisco Jose Garcia; Serra-Majem, Lluís; Pinto, Xavier; Basora, Josep; Sorlí, José V; Vinyoles, Ernest; Zazpe, Itziar; Martínez-González, Miguel-Ángel

    2016-06-14

    Dietary intervention success requires strong participant adherence, but very few studies have examined factors related to both short-term and long-term adherence. A better understanding of predictors of adherence is necessary to improve the design and execution of dietary intervention trials. This study was designed to identify participant characteristics at baseline and study features that predict short-term and long-term adherence with interventions promoting the Mediterranean-type diet (MedDiet) in the PREvención con DIeta MEDiterránea (PREDIMED) randomized trial. Analyses included men and women living in Spain aged 55-80 at high risk for cardiovascular disease. Participants were randomized to the MedDiet supplemented with either complementary extra-virgin olive oil (EVOO) or tree nuts. The control group and participants with insufficient information on adherence were excluded. PREDIMED began in 2003 and ended in 2010. Investigators assessed covariates at baseline and dietary information was updated yearly throughout follow-up. Adherence was measured with a validated 14-point Mediterranean-type diet adherence score. Logistic regression was used to examine associations between baseline characteristics and adherence at one and four years of follow-up. Participants were randomized to the MedDiet supplemented with EVOO (n = 2,543; 1,962 after exclusions) or tree nuts (n = 2,454; 2,236 after exclusions). A higher number of cardiovascular risk factors, larger waist circumference, lower physical activity levels, lower total energy intake, poorer baseline adherence to the 14-point adherence score, and allocation to MedDiet + EVOO each independently predicted poorer adherence. Participants from PREDIMED recruiting centers with a higher total workload (measured as total number of persons-years of follow-up) achieved better adherence. No adverse events or side effects were reported. 
To maximize adherence in dietary interventions, additional efforts to promote adherence should be directed at participants with lower baseline adherence to the intended diet and poorer health status. The design of multicenter nutrition trials should prioritize a few large centers with more participants in each, rather than many small centers. This parallel randomized trial was registered at controlled-trials.com (http://www.controlled-trials.com/ISRCTN35739639). International Standard Randomized Controlled Trial Number (ISRCTN): 35739639. Registration date: 5 October 2005.

  4. A randomized controlled trial of intranasal ketamine in migraine with prolonged aura.

    PubMed

    Afridi, Shazia K; Giffin, Nicola J; Kaube, Holger; Goadsby, Peter J

    2013-02-12

    The aim of our study was to test the hypothesis that ketamine would affect aura in a randomized controlled double-blind trial, and thus to provide direct evidence for the role of glutamatergic transmission in human aura. We performed a double-blinded, randomized parallel-group controlled study investigating the effect of 25 mg intranasal ketamine on migraine with prolonged aura in 30 migraineurs using 2 mg intranasal midazolam as an active control. Each subject recorded data from 3 episodes of migraine. Eighteen subjects completed the study. Ketamine reduced the severity (p = 0.032) but not duration of aura in this group, whereas midazolam had no effect. These data provide translational evidence for the potential importance of glutamatergic mechanisms in migraine aura and offer a pharmacologic parallel between animal experimental work on cortical spreading depression and the clinical problem. This study provides class III evidence that intranasal ketamine is effective in reducing aura severity in patients with migraine with prolonged aura.

  5. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
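The random-subset idea transfers to any fitting problem; the sketch below swaps the hologram scattering model for a simple line fit, keeping the mechanism (fit to a random fraction of the data points) intact. All data here are synthetic:

```python
# Random-subset fitting on a toy 1-D problem: fit a model to 1% of the
# data points instead of all of them. The scattering model is replaced by
# a line; the speedup mechanism (fewer residual evaluations) is the same
# in spirit.

import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 100_000)                   # stand-in for hologram pixels
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # "measured" data, slope 2, intercept 1

full_fit = np.polyfit(x, y, 1)                   # fit using every point

idx = rng.choice(x.size, size=1000, replace=False)  # 1% random subset
subset_fit = np.polyfit(x[idx], y[idx], 1)

print("full  :", np.round(full_fit, 3))    # ~[2.0, 1.0]
print("subset:", np.round(subset_fit, 3))  # nearly identical, 100x fewer points
```

The subset estimate's uncertainty grows only as the inverse square root of the subset size, which is why the paper can cut the pixel count twenty-fold while keeping position errors at the 10 nm level.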

  6. Multifamily psychoeducation for improvement of mental health among relatives of patients with major depressive disorder lasting more than one year: study protocol for a randomized controlled trial.

    PubMed

    Katsuki, Fujika; Takeuchi, Hiroshi; Watanabe, Norio; Shiraishi, Nao; Maeda, Tohru; Kubota, Yosuke; Suzuki, Masako; Yamada, Atsurou; Akechi, Tatsuo

    2014-08-12

    Major depressive disorder (MDD) is a long-lasting disorder with frequent relapses that have significant effects on the patient's family. Family psychoeducation is recognized as part of the optimal treatment for patients with psychotic disorder, and a previous randomized controlled trial found that family psychoeducation is effective in enhancing the treatment of MDD. Although MDD can easily become a chronic illness, there has been no intervention study on the families of patients with chronic depression. In the present study, we design a randomized controlled trial to examine the effectiveness of family psychoeducation in improving the mental health of relatives of patients with MDD lasting more than one year. Participants are patients with MDD lasting more than one year and their relatives. An individually randomized, parallel-group trial design will be employed. Participants will be allocated to one of two treatment conditions: relatives will receive (a) family psychoeducation (four two-hour biweekly multifamily psychoeducation sessions) plus treatment-as-usual for the patient (consultation by physicians), or (b) counseling for the family (one counseling session from a nurse) plus treatment-as-usual for the patient. The primary outcome measure will be relatives' mental health as measured by the K6, a scale developed to screen for DSM-IV depressive and anxiety disorders. Additionally, the severity of depressive symptoms in patients, measured by the Beck Depression Inventory-II (BDI-II) scale, will be assessed. Data from the intention-to-treat sample will be analyzed 16 weeks after randomization. This is the first study to evaluate the effectiveness of family psychoeducation for relatives of patients with MDD lasting more than one year. If this type of intervention is effective, it could become a new method of rehabilitation for patients with MDD lasting more than one year. ClinicalTrials.gov NCT01734291 (registration date: 18 October 2012).

  7. The effectiveness of an intervention to reduce alcohol‐related violence in premises licensed for the sale and on‐site consumption of alcohol: a randomized controlled trial

    PubMed Central

    Alam, M. Fasihul; Heikkinen, Marjukka; Hood, Kerenza; Huang, Chao; Moore, Laurence; Murphy, Simon; Playle, Rebecca; Shepherd, Jonathan; Shovelton, Claire; Sivarajasingam, Vaseekaran; Williams, Anne

    2017-01-01

Abstract Background and Aims Premises licensed for the sale and consumption of alcohol can contribute to levels of assault‐related injury through poor operational practices that, if addressed, could reduce violence. We tested the real‐world effectiveness of an intervention designed to change premises operation, whether any intervention effect changed over time, and the effect of intervention dose. Design A parallel randomized controlled trial with the unit of allocation and outcomes measured at the level of individual premises. Setting All premises (public houses, nightclubs or hotels with a public bar) in Wales, UK. Participants A randomly selected subsample (n = 600) of eligible premises (that had one or more violent incidents recorded in police‐recorded crime data; n = 837) were randomized into control and intervention groups. Intervention and comparator Intervention premises were audited by Environmental Health Practitioners who identified risks for violence and provided feedback by varying dose (informal, through written advice, follow‐up visits) on how risks could be addressed. Control premises received usual practice. Measurements Police data were used to derive a binary variable describing whether, on each day premises were open, one or more violent incidents were evident over a 455‐day period following randomization. Findings Due to premises being unavailable at the time of intervention delivery, 208 received the intervention and 245 were subject to usual practice in an intention‐to‐treat analysis. The intervention was associated with an increase in police‐recorded violence compared to usual practice (hazard ratio = 1.34, 95% confidence interval = 1.20–1.51). Exploratory analyses suggested that reduced violence was associated with greater intervention dose (follow‐up visits). 
Conclusion An Environmental Health Practitioner‐led intervention in premises licensed for the sale and on‐site consumption of alcohol resulted in an increase in police recorded violence. PMID:28543914

  8. Methodology Series Module 4: Clinical Trials.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    In a clinical trial, study participants are (usually) divided into two groups. One group is then given the intervention and the other group is not given the intervention (or may be given some existing standard of care). We compare the outcomes in these groups and assess the role of intervention. Some of the trial designs are (1) parallel study design, (2) cross-over design, (3) factorial design, and (4) withdrawal group design. The trials can also be classified according to the stage of the trial (Phase I, II, III, and IV) or the nature of the trial (efficacy vs. effectiveness trials, superiority vs. equivalence trials). Randomization is one of the procedures by which we allocate different interventions to the groups. It ensures that all the included participants have a specified probability of being allocated to either of the groups in the intervention study. If participants and the investigator know about the allocation of the intervention, then it is called an "open trial." However, many of the trials are not open - they are blinded. Blinding is useful to minimize bias in clinical trials. The researcher should familiarize themselves with the CONSORT statement and the appropriate Clinical Trials Registry of India.
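    The allocation idea can be sketched briefly: simple (unrestricted) randomization gives every included participant the same fixed probability of entering each arm. The function below is a hypothetical illustration, not code from the module:

    ```python
    import random

    def allocate(n_participants, arms=("intervention", "control"), seed=42):
        """Simple (unrestricted) randomization: each participant has an
        equal, fixed probability of being assigned to either arm."""
        rng = random.Random(seed)  # seeded for a reproducible allocation list
        return [rng.choice(arms) for _ in range(n_participants)]

    assignments = allocate(10)
    ```

    In practice, trials often use restricted schemes (e.g., permuted blocks or stratified randomization) to keep group sizes balanced, but the underlying guarantee of a specified allocation probability is the same.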

  9. Methodology Series Module 4: Clinical Trials

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    In a clinical trial, study participants are (usually) divided into two groups. One group is then given the intervention and the other group is not given the intervention (or may be given some existing standard of care). We compare the outcomes in these groups and assess the role of intervention. Some of the trial designs are (1) parallel study design, (2) cross-over design, (3) factorial design, and (4) withdrawal group design. The trials can also be classified according to the stage of the trial (Phase I, II, III, and IV) or the nature of the trial (efficacy vs. effectiveness trials, superiority vs. equivalence trials). Randomization is one of the procedures by which we allocate different interventions to the groups. It ensures that all the included participants have a specified probability of being allocated to either of the groups in the intervention study. If participants and the investigator know about the allocation of the intervention, then it is called an “open trial.” However, many of the trials are not open – they are blinded. Blinding is useful to minimize bias in clinical trials. The researcher should familiarize themselves with the CONSORT statement and the appropriate Clinical Trials Registry of India. PMID:27512184

  10. Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary.

    PubMed

    Li, J; Guo, L-X; Zeng, H; Han, X-B

    2009-06-01

A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at a large incident angle and a large wind speed.
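    The Yee update at the heart of any FDTD code is easy to sketch. The following minimal 1-D free-space example (a hypothetical illustration, not the authors' code) uses normalized units with a Courant number of 0.5 and a hard Gaussian source; the paper's UPML termination and MPI slab decomposition, in which each rank updates its own slab and exchanges one boundary cell per step with its neighbors, are omitted for brevity:

    ```python
    from math import exp

    def fdtd_1d(nsteps=200, nx=200, src=100):
        """Minimal 1-D free-space FDTD (Yee scheme) in normalized units.
        The 0.5 factor is the Courant number, which keeps the scheme stable."""
        ez = [0.0] * nx  # electric field samples
        hy = [0.0] * nx  # magnetic field samples (staggered half a cell)
        for t in range(nsteps):
            for k in range(1, nx):              # E update from the curl of H
                ez[k] += 0.5 * (hy[k - 1] - hy[k])
            ez[src] = exp(-0.5 * ((t - 30) / 10.0) ** 2)  # hard Gaussian source
            for k in range(nx - 1):             # H update from the curl of E
                hy[k] += 0.5 * (ez[k] - ez[k + 1])
        return ez

    fields = fdtd_1d()
    ```

    In an MPI version, the two inner loops run only over a rank's local slab, with a one-cell halo exchange before each update; the UPML changes only the update coefficients near the domain edges.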

  11. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  12. Xyce parallel electronic simulator : users' guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  13. Gratings and Random Reflectors for Near-Infrared PIN Diodes

    NASA Technical Reports Server (NTRS)

    Gunapala, Sarath; Bandara, Sumith; Liu, John; Ting, David

    2007-01-01

    Crossed diffraction gratings and random reflectors have been proposed as means to increase the quantum efficiencies of InGaAs/InP positive/intrinsic/ negative (PIN) diodes designed to operate as near-infrared photodetectors. The proposal is meant especially to apply to focal-plane imaging arrays of such photodetectors to be used for near-infrared imaging. A further increase in quantum efficiency near the short-wavelength limit of the near-infrared spectrum of such a photodetector array could be effected by removing the InP substrate of the array. The use of crossed diffraction gratings and random reflectors as optical devices for increasing the quantum efficiencies of quantum-well infrared photodetectors (QWIPs) was discussed in several prior NASA Tech Briefs articles. While the optical effects of crossed gratings and random reflectors as applied to PIN photodiodes would be similar to those of crossed gratings and random reflectors as applied to QWIPs, the physical mechanisms by which these optical effects would enhance efficiency differ between the PIN-photodiode and QWIP cases: In a QWIP, the multiple-quantum-well layers are typically oriented parallel to the focal plane and therefore perpendicular or nearly perpendicular to the direction of incidence of infrared light. By virtue of the applicable quantum selection rules, light polarized parallel to the focal plane (as normally incident light is) cannot excite charge carriers and, hence, cannot be detected. A pair of crossed gratings or a random reflector scatters normally or nearly normally incident light so that a significant portion of it attains a component of polarization normal to the focal plane and, hence, can excite charge carriers. A pair of crossed gratings or a random reflector on a PIN photodiode would also scatter light into directions away from the perpendicular to the focal plane. 
However, in this case, the reason for redirecting light away from the perpendicular is to increase the length of the optical path through the detector to increase the probability of absorption of photons and thereby increase the resulting excitation of charge carriers. A pair of crossed gratings or a random reflector according to the proposal would be fabricated as an integral part of the photodetector structure on the face opposite the focal plane (see figure). In the presence of crossed gratings, light would make four passes through the device before departing. In the presence of a random reflector, a significant portion of the light would make more than four passes: After each bounce, light would be scattered at a different random angle, and would have a chance to escape only when it was reflected, relative to the normal, at an angle less than the critical angle for total internal reflection. Given the indices of refraction of the photodiode materials, this angle would be about 17°. This amounts to a very narrow cone for escape of trapped light.
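    The quoted ~17° follows from Snell's law: a ray striking the exit face at more than the critical angle arcsin(n_outside/n_inside) is totally internally reflected. Assuming a representative refractive index of about 3.4 for the III-V photodiode materials (an illustrative value, not taken from the article):

    ```python
    import math

    def critical_angle_deg(n_inside, n_outside=1.0):
        """Critical angle for total internal reflection, in degrees.
        Rays hitting the interface beyond this angle stay trapped."""
        return math.degrees(math.asin(n_outside / n_inside))

    angle = critical_angle_deg(3.4)  # roughly 17 degrees for n of about 3.4
    ```

    The narrowness of this escape cone is what makes the random reflector effective: most scattered rays remain trapped and make additional absorbing passes.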

  14. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  15. Development of a Novel Six-Month Nutrition Intervention for a Randomized Trial in Older Men with Mobility Limitations.

    PubMed

    Apovian, C M; Singer, M R; Campbell, W W; Bhasin, S; McCarthy, A C; Shah, M; Basaria, S; Moore, L L

    2017-01-01

    Nutrition impacts the development of sarcopenia and protein intake is an important modulator of skeletal muscle mass loss in older people. The Optimizing Protein Intake in Older Men with Mobility Limitation (OPTIMEN) Trial was designed to assess the independent and combined effects of higher protein intake and a promyogenic agent, testosterone, on lean body mass, muscle strength and physical function in older men with mobility disability. The purpose of this paper is to describe the experimental design and nutrition intervention, including techniques used by research dietitians to develop and deliver energy and protein-specific meals to the homes of community-dwelling participants. Strategies to enhance long-term dietary compliance are detailed. Randomized, double-blind, placebo-controlled six-month intervention trial. Participants were recruited from Boston MA USA and surrounding communities. Older men who were mobility-limited (Short Physical Performance Battery (SPPB) 3-10) and consuming less protein (<0.83 g/kg/day) were recruited for this study. Here we report the successful implementation of a double-blind, placebo-controlled, parallel group, randomized controlled trial with a 6-month intervention period among community-living men, age 65 years and older with a mobility limitation. A controlled feeding plan was used to deliver required energy intakes and prescribed protein quantities of 0.8 or 1.3 grams/kilogram/day (g/kg/d) in three meals plus snacks and supplements. A 2x2 factorial design was used to assess the effects of protein level alone and in combination with testosterone (vs. placebo) on changes in lean body mass (primary outcome), muscle strength, and physical function. A total of 154 men met the eligibility criteria; 112 completed a 2-week run-in period designed to evaluate compliance with the nutrition intervention. Of these, 92 subjects met compliance eligibility criteria and agreed to be randomized; 85% completed the full trial. 
The study successfully delivered three meals per day to subjects, with a high degree of compliance and subject satisfaction. Overall self-reported compliance rates were 80% and 93% for the meals and supplements, respectively. Details of compliance strategies are discussed. This community-based study design may serve as a model for longer-term nutritional interventions requiring monitoring of dietary compliance in a home-based feeding and supplementation trial.
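    The 2x2 factorial allocation described above (protein level crossed with testosterone versus placebo) can be sketched as follows; the function name and labels are hypothetical illustrations, not the trial's actual randomization procedure:

    ```python
    import itertools
    import random

    def factorial_2x2_allocate(subject_ids, seed=7):
        """Assign subjects at random to the four cells of a 2x2 factorial:
        protein level (0.8 or 1.3 g/kg/d) crossed with testosterone/placebo."""
        cells = list(itertools.product(("0.8 g/kg/d", "1.3 g/kg/d"),
                                       ("testosterone", "placebo")))
        rng = random.Random(seed)  # seeded for a reproducible allocation
        return {sid: rng.choice(cells) for sid in subject_ids}

    arms = factorial_2x2_allocate(range(92))  # 92 subjects were randomized
    ```

    The factorial layout is what lets a single trial estimate the effect of each factor alone and of the two combined, which is the stated aim of the OPTIMEN design.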

  16. Effectiveness of a vegetable dental chew on periodontal disease parameters in toy breed dogs.

    PubMed

    Clarke, D E; Kelman, M; Perkins, N

    2011-01-01

Sixteen toy breed dogs completed a parallel, 70-day, two-period, cross-over design clinical study to determine the effect of a vegetable dental chew on gingivitis, halitosis, plaque, and calculus accumulations. The dogs were randomly assigned into two groups. During one study period the dogs were fed a non-dental dry diet only, and during the second study period they were fed the same dry diet supplemented by the daily addition of a vegetable dental chew. Daily administration of the dental chew was shown to reduce halitosis, as well as significantly reduce gingivitis, plaque, and calculus accumulation, and therefore may play a significant role in the improvement of canine oral health over the long term.

  17. Effects of Study Design and Allocation on participant behaviour--ESDA: study protocol for a randomized controlled trial.

    PubMed

    Kypri, Kypros; McCambridge, Jim; Wilson, Amanda; Attia, John; Sheeran, Paschal; Bowe, Steve; Vater, Tina

    2011-02-14

    What study participants think about the nature of a study has been hypothesised to affect subsequent behaviour and to potentially bias study findings. In this trial we examine the impact of awareness of study design and allocation on participant drinking behaviour. A three-arm parallel group randomised controlled trial design will be used. All recruitment, screening, randomisation, and follow-up will be conducted on-line among university students. Participants who indicate a hazardous level of alcohol consumption will be randomly assigned to one of three groups. Group A will be informed their drinking will be assessed at baseline and again in one month (as in a cohort study design). Group B will be told the study is an intervention trial and they are in the control group. Group C will be told the study is an intervention trial and they are in the intervention group. All will receive exactly the same brief educational material to read. After one month, alcohol intake for the past 4 weeks will be assessed. The experimental manipulations address subtle and previously unexplored ways in which participant behaviour may be unwittingly influenced by standard practice in trials. Given the necessity of relying on self-reported outcome, it will not be possible to distinguish true behaviour change from reporting artefact. This does not matter in the present study, as any effects of awareness of study design or allocation involve bias that is not well understood. There has been little research on awareness effects, and our outcomes will provide an indication of the possible value of further studies of this type and inform hypothesis generation. Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12610000846022.

  18. Effectiveness of supervised oral health maintenance in hearing impaired and mute children- A parallel randomized controlled trial

    PubMed Central

    Pareek, Sonia; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Atri, Mansi; Singh, Kushpal

    2015-01-01

Context: Individuals with special needs may have great limitations in oral hygiene performance due to their potential motor, sensory, and intellectual disabilities. Thus, oral health care utilization is low among disabled people. Hearing disorders affect general behavior and impair the level of social functioning. Objectives: The present study was conducted to assess the dental health outcomes following supervised tooth brushing among institutionalized hearing impaired and mute children in Jaipur, Rajasthan. Materials and Methods: The study followed a single-blind, parallel, and randomized controlled design. A total of 315 students were divided into three groups of 105 children each. Group A included resident students, who underwent supervised tooth brushing under the supervision of their parents. The non-resident students were further divided into two groups: Group B and Group C. Group B children were under the supervision of a caregiver and Group C children were under the supervision of both investigator and caregiver. Results: There was an average reduction in plaque score during the second follow-up, conducted 3 weeks after the start of the study, and in the final follow-up, conducted at 6 weeks. There was also a marked reduction in the gingival index scores in all three groups. Conclusion: A program of teacher- and parent-supervised toothbrushing with fluoride toothpaste can be safely targeted to socially deprived communities and can enable a significant reduction in plaque and gingival scores. Thus, an important principle of oral health education is the active involvement of parents and caregivers. PMID:26236676

  19. Rationale and design of the HepZero study: a prospective, multicenter, international, open, randomized, controlled clinical study with parallel groups comparing heparin-free dialysis with heparin-coated dialysis membrane (Evodial) versus standard care: study protocol for a randomized controlled trial.

    PubMed

    Rossignol, Patrick; Dorval, Marc; Fay, Renaud; Ros, Joan Fort; Loughraieb, Nathalie; Moureau, Frédérique; Laville, Maurice

    2013-06-01

Anticoagulation for chronic dialysis patients with contraindications to heparin administration is challenging. Current guidelines state that in patients with increased bleeding risks, strategies that can induce systemic anticoagulation should be avoided. Heparin-free dialysis using intermittent saline flushes is widely adopted as the method of choice for patients at risk of bleeding, although on-line blood predilution may also be used. A new dialyzer, Evodial (Gambro, Lund, Sweden), is grafted with unfractionated heparin during the manufacturing process and may allow safe and efficient heparin-free hemodialysis sessions. In the present trial, Evodial was compared to standard care with either saline flushes or blood predilution. The HepZero study is the first international (seven countries), multicenter (10 centers), randomized, controlled, open-label, non-inferiority (and if applicable subsequently, superiority) trial with two parallel groups, comprising 252 end-stage renal disease patients treated by maintenance hemodialysis for at least 3 months and requiring heparin-free dialysis treatments. Patients will be treated during a maximum of three heparin-free dialysis treatments with either saline flushes or blood predilution (control group), or Evodial. The first heparin-free dialysis treatment will be considered successful when there is: no complete occlusion of air traps or dialyzer rendering dialysis impossible; no additional saline flushes to prevent clotting; no change of dialyzer or blood lines because of clotting; and no premature termination (early rinse-back) because of clotting. The primary objectives of the study are to determine the effectiveness of the Evodial dialyzer, compared with standard care, in terms of successful treatments during the first heparin-free dialysis. If the non-inferiority of Evodial is demonstrated, then the superiority of Evodial over standard care will be tested. 
The HepZero study results may have major clinical implications for patient care. ClinicalTrials.gov NCT01318486.

  20. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Zachary M.; Wendelberger, Joanne Roth

The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and arrive at the desired parameter set more quickly.
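    As a rough illustration of the random-walk search described above (a minimal greedy sketch on a toy objective, not the authors' implementation; the function names are hypothetical, and the fPCA reduction and human steering are omitted):

    ```python
    import random

    def random_walk_minimize(f, x0, step=0.5, iters=500, seed=0):
        """Greedy random-walk search: propose a uniform perturbation of the
        current point and keep it only if the objective improves."""
        rng = random.Random(seed)  # seeded for reproducibility
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc < fx:  # accept only descent steps
                x, fx = cand, fc
        return x, fx

    # Toy quadratic objective standing in for an expensive simulation misfit.
    best, val = random_walk_minimize(lambda v: sum(t * t for t in v), [3.0, -2.0])
    ```

    Each proposal here would be an expensive simulation in the paper's setting, which is why batching proposals in parallel and letting a human prune unpromising regions matters.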

  1. Open-Label, Randomized, Parallel-Group Controlled Clinical Trial of Massage for Treatment of Depression in HIV-Infected Subjects

    PubMed Central

    Gertsik, Lev; Favreau, Joya T.; Smith, Shawnee I.; Mirocha, James M.; Rao, Uma; Daar, Eric S.

    2013-01-01

    Abstract Objectives The study objectives were to determine whether massage therapy reduces symptoms of depression in subjects with human immunodeficiency virus (HIV) disease. Design Subjects were randomized non-blinded into one of three parallel groups to receive Swedish massage or to one of two control groups, touch or no intervention for eight weeks. Settings/location The study was conducted at the Department of Psychiatry and Behavioral Neurosciences at Cedars-Sinai Medical Center in Los Angeles, California, which provided primary clinical care in an institutional setting. Subjects Study inclusion required being at least 16 years of age, HIV-seropositive, with a diagnosis of major depressive disorder. Subjects had to be on a stable neuropsychiatric, analgesic, and antiretroviral regimen for >30 days with no plans to modify therapy for the duration of the study. Approximately 40% of the subjects were currently taking antidepressants. All subjects were medically stable. Fifty-four (54) subjects were randomized, 50 completed at least 1 week (intent-to-treat; ITT), and 37 completed the study (completers). Interventions Swedish massage and touch subjects visited the massage therapist for 1 hour twice per week. The touch group had a massage therapist place both hands on the subject with slight pressure, but no massage, in a uniform distribution in the same pattern used for the massage subjects. Outcome measures The primary outcome measure was the Hamilton Rating Scale for Depression score, with the secondary outcome measure being the Beck Depression Inventory. Results For both the ITT and completers analyses, massage significantly reduced the severity of depression beginning at week 4 (p≤0.04) and continuing at weeks 6 (p≤0.03) and 8 (p≤0.005) compared to no intervention and/or touch. Conclusions The results indicate that massage therapy can reduce symptoms of depression in subjects with HIV disease. 
The durability of the response, optimal “dose” of massage, and mechanisms by which massage exerts its antidepressant effects remain to be determined. PMID:23098696

  2. Safety and Efficacy of ABT-089 in Pediatric Attention-Deficit/Hyperactivity Disorder: Results from Two Randomized Placebo-Controlled Clinical Trials

    ERIC Educational Resources Information Center

    Wilens, Timothy E.; Gault, Laura M.; Childress, Ann; Kratochvil, Christopher J.; Bensman, Lindsey; Hall, Coleen M.; Olson, Evelyn; Robieson, Weining Z.; Garimella, Tushar S.; Abi-Saab, Walid M.; Apostol, George; Saltarelli, Mario D.

    2011-01-01

    Objective: To assess the safety and efficacy of ABT-089, a novel alpha[subscript 4]beta[subscript 2] neuronal nicotinic receptor partial agonist, vs. placebo in children with attention-deficit/hyperactivity disorder (ADHD). Method: Two multicenter, randomized, double-blind, placebo-controlled, parallel-group studies of children 6 through 12 years…

  3. The Acute Effect of Methylphenidate in Brazilian Male Children and Adolescents with ADHD: A Randomized Clinical Trial

    ERIC Educational Resources Information Center

    Szobot, C. M.; Ketzer, C.; Parente, M. A.; Biederman, J.; Rohde, L. A.

    2004-01-01

    Objective: To evaluate the acute efficacy of methylphenidate (MPH) in Brazilian male children and adolescents with ADHD. Method: In a 4-day, double-blind, placebo-controlled, randomized, fix dose escalating, parallel-group trial, 36 ADHD children and adolescents were allocated to two groups: MPH (n = 19) and placebo (n = 17). Participants were…

  4. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial.

    PubMed

    Isiordia-Espinoza, M-A; Pozos-Guillen, A; Martinez-Rider, R; Perez-Urizar, J

    2016-09-01

Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption, and adverse effects. Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain than patients receiving intramuscular tramadol. According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery.

  5. Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes

    DOEpatents

    Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; Caffrey, Patrick; Fier, Jeffrey

    2010-10-05

    In a parallel computing environment comprising a network of SMP nodes each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system uses a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized for large processor-count SPMD bulk-synchronous programs.

  6. Design of the evolution of management strategies of heart failure patients with implantable defibrillators (EVOLVO) study to assess the ability of remote monitoring to treat and triage patients more effectively

    PubMed Central

    Marzegalli, Maurizio; Landolina, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Pappone, Alessia; Guenzati, Giuseppe; Campana, Carlo; Frigerio, Maria; Parati, Gianfranco; Curnis, Antonio; Colangelo, Irene; Valsecchi, Sergio

    2009-01-01

    Background Heart failure patients with implantable defibrillators (ICD) frequently visit the clinic for routine device monitoring. Moreover, in the case of clinical events, such as ICD shocks or alert notifications for changes in cardiac status or safety issues, they often visit the emergency department or the clinic for an unscheduled visit. These planned and unplanned visits place a great burden on healthcare providers. Internet-based remote device interrogation systems, which give physicians remote access to patients' data, are being proposed in order to reduce routine and interim visits and to detect and notify alert conditions earlier. Methods The EVOLVO study is a prospective, randomized, parallel, unblinded, multicenter clinical trial designed to compare remote ICD management with the current standard of care, in order to assess its ability to treat and triage patients more effectively. Two-hundred patients implanted with wireless-transmission-enabled ICD will be enrolled and randomized to receive either the Medtronic CareLink® monitor for remote transmission or the conventional method of in-person evaluations. The purpose of this manuscript is to describe the design of the trial. The results, which are to be presented separately, will characterize healthcare utilizations as a result of ICD follow-up by means of remote monitoring instead of conventional in-person evaluations. Trial registration ClinicalTrials.gov: NCT00873899 PMID:19538734

  7. Fringe Capacitance of a Parallel-Plate Capacitor.

    ERIC Educational Resources Information Center

    Hale, D. P.

    1978-01-01

    Describes an experiment designed to measure the forces between charged parallel plates, and determines the relationship among the effective electrode area, the measured capacitance values, and the electrode spacing of a parallel plate capacitor. (GA)
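    The experiment above rests on the ideal-capacitor relation C = ε0·A/d, which ignores fringing fields at the plate edges; fringing makes a real capacitor behave as if its plates had a slightly larger effective area. A minimal sketch of the two relations (function names are illustrative, not from the paper):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ideal_capacitance(area_m2, spacing_m):
    """Capacitance (F) of an ideal, fringe-free parallel-plate capacitor."""
    return EPS0 * area_m2 / spacing_m

def effective_area(measured_c, spacing_m):
    """Effective electrode area implied by a measured capacitance and spacing;
    comparing this with the geometric area exposes the fringe contribution."""
    return measured_c * spacing_m / EPS0
```

For 100 cm² plates separated by 1 mm, the ideal formula gives roughly 88.5 pF; a measured value above that would indicate the fringe-field contribution the experiment sets out to quantify.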

  8. Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School

    NASA Astrophysics Data System (ADS)

    Lafage, Vincent

    2017-11-01

    Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX), manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code will illustrate the various programming paradigms in use beyond language implementation. The problem of parallel random number generation will be addressed. We give a short report of the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
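    The parallel random number generation problem mentioned above is that naively sharing one generator (or one seed) across workers produces correlated or overlapping streams. One common remedy is to derive an independent stream per worker from a single root seed; the hash-based seed derivation below is a simple sketch of that idea, not material from the school (production codes often use counter-based RNGs or generator jump-ahead instead):

```python
import hashlib
import random

def spawn_streams(root_seed, n_workers):
    """One PRNG per worker, each seeded by hashing (root_seed, worker_id).
    Distinct worker ids give distinct, reproducible streams."""
    streams = []
    for i in range(n_workers):
        digest = hashlib.sha256(f"{root_seed}:{i}".encode()).digest()
        streams.append(random.Random(int.from_bytes(digest[:8], "big")))
    return streams

def mc_pi(streams, samples_per_worker):
    """Toy Monte Carlo estimate of pi; each stream's samples stand in for
    the work one parallel worker would do (run serially here)."""
    hits = 0
    for rng in streams:
        for _ in range(samples_per_worker):
            x, y = rng.random(), rng.random()
            hits += (x * x + y * y) <= 1.0
    return 4.0 * hits / (samples_per_worker * len(streams))
```

Because each worker's stream depends only on the root seed and its id, the computation is reproducible regardless of how many workers run concurrently or in what order they finish.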

  9. School-based mindfulness intervention for stress reduction in adolescents: Design and methodology of an open-label, parallel group, randomized controlled trial.

    PubMed

    Johnstone, Jeanette M; Roake, Chelsea; Sheikh, Ifrah; Mole, Ashlie; Nigg, Joel T; Oken, Barry

    2016-12-15

    Adolescents are in a high-risk period developmentally, in terms of susceptibility to stress. A mindfulness intervention represents a potentially useful strategy for developing cognitive and emotion regulation skills associated with successful stress coping. Mindfulness strategies have been used successfully for emotional coping in adults, but are not as well studied in youth. This article details a novel proposal for the design of an 8-week randomized study to evaluate a high school-based mindfulness curriculum delivered as part of a two-semester health class. A wellness education intervention is proposed as an active control, along with a waitlist control condition. All students enrolled in a sophomore (10th grade) health class at a private suburban high school will be invited to participate (n = 300). Pre-test assessments will be obtained by youth report, parent ratings, and on-site behavioral testing. The assessments will evaluate baseline stress, mood, emotional coping, controlled attention, and working memory. Participants, divided into 13 classrooms, will be randomized into one of three conditions, by classroom: a mindfulness intervention, an active control (wellness education), and a passive control (waitlist). Waitlisted participants will receive one of the interventions in the following term. Intervention groups will meet weekly for 8 weeks during regularly scheduled health classes. Immediate post-tests will be conducted, followed by a 60-day post-test. It is hypothesized that the mindfulness intervention will outperform the other conditions with regard to the adolescents' mood, attention and response to stress.
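    The randomization-by-classroom step described above is a cluster randomization: intact groups, not individuals, are assigned to conditions. A minimal sketch of one way to do this (the names and the round-robin allocation scheme are illustrative assumptions, not the study's actual procedure):

```python
import random

def randomize_clusters(clusters, arms, seed=2016):
    """Cluster randomization sketch: shuffle intact groups (here, classrooms)
    and deal them round-robin across arms, so arm sizes differ by at most one."""
    rng = random.Random(seed)  # fixed seed keeps the allocation reproducible
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    allocation = {arm: [] for arm in arms}
    for i, cluster in enumerate(shuffled):
        allocation[arms[i % len(arms)]].append(cluster)
    return allocation

arms = ["mindfulness", "wellness education", "waitlist"]
alloc = randomize_clusters([f"class-{i}" for i in range(1, 14)], arms)
```

With 13 classrooms and three arms, the sizes necessarily come out unequal (5/4/4); analysis of such a design must also account for clustering, as the head-note review emphasizes.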

  10. Efficacy and safety of combined treatment of miniscalpel acupuncture and non-steroidal anti-inflammatory drugs: an assessor-blinded randomized controlled pilot study.

    PubMed

    Jun, Seungah; Lee, Jung Hee; Gong, Han Mi; Chung, Yeon-Joong; Kim, Ju-Ran; Park, Chung A; Choi, Seong Hun; Lee, Geon-Mok; Lee, Hyun-Jong; Kim, Jae Soo

    2018-01-12

    Chronic neck pain is a common musculoskeletal disease during the lifespan of an individual. With an increase in dependence on computer technology, the prevalence of chronic neck pain is expected to rise and this can lead to socioeconomic problems. We have designed the current pilot study to evaluate the efficacy and safety of miniscalpel acupuncture treatment combined with non-steroidal anti-inflammatory drugs (NSAIDs) in patients with chronic neck pain. This seven-week clinical trial has been designed as an assessor-blinded, randomized controlled trial with three parallel arms. Thirty-six patients will be recruited and randomly allocated to three treatment groups: miniscalpel acupuncture treatment; NSAIDs; and miniscalpel acupuncture treatment combined with NSAIDs. Patients in the miniscalpel acupuncture and combined treatment groups will receive three sessions of miniscalpel acupuncture over a three-week period. Patients in the NSAIDs and combined treatment groups will receive zaltoprofen (one oral tablet, three times a day for three weeks). Primary and secondary outcomes will be measured at weeks 0 (baseline), 1, 2, 3 (primary end point), and 7 (four weeks after treatment completion) using the visual analogue scale and the Neck Disability Index, EuroQol 5-dimension questionnaire, and Patients' Global Impression of Change scale, respectively. Adverse events will also be recorded. This pilot study will provide a basic foundation for a future large-scale trial as well as information about the feasibility of miniscalpel acupuncture treatment combined with NSAIDs for chronic neck pain. Korean Clinical Research Information Service registry, KCT0002258 . Registered on 9 March 2017.

  11. Protocol: the effect of 12 weeks of Tai Chi practice on anxiety in healthy but stressed people compared to exercise and wait-list comparison groups: a randomized controlled trial.

    PubMed

    Zheng, Shuai; Lal, Sara; Meier, Peter; Sibbritt, David; Zaslawski, Chris

    2014-06-01

    Stress is a major problem in today's fast-paced society and can lead to serious psychosomatic complications. The ancient Chinese mind-body exercise of Tai Chi may provide an alternative and self-sustaining option to pharmaceutical medication for stressed individuals to improve their coping mechanisms. The protocol of this study is designed to evaluate whether Tai Chi practice is equivalent to standard exercise and whether the Tai Chi group is superior to a wait-list control group in improving stress coping levels. This is a three-arm, parallel, randomized clinical trial that evaluates Tai Chi practice against standard exercise, and a Tai Chi group against a nonactive wait-list control group, over a period of 6 weeks with a 6-week follow-up. A total of 72 healthy adult participants (aged 18-60 years) who are either Tai Chi naïve or have not practiced Tai Chi in the past 12 months will be randomized into a Tai Chi group (n = 24), an exercise group (n = 24) or a wait-list group (n = 24). The primary outcome measure will be the State Trait Anxiety Inventory, with secondary outcome measures being the Perceived Stress Scale 14, heart rate variability, blood pressure, Short Form 36 and a visual analog scale. The protocol is reported using the appropriate Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) items. Copyright © 2014. Published by Elsevier B.V.

  12. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  13. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming library Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate that the MSO methodology is well suited to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel.
Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
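    The speedup and efficiency figures quoted above follow the standard strong-scaling definitions: speedup S = T1/Tp (serial time over parallel time) and efficiency E = S/p for p processors. As a quick reference:

```python
def speedup(t_serial, t_parallel):
    """Strong-scaling speedup S = T_1 / T_p."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = S / p; 1.0 is perfect linear speedup."""
    return speedup(t_serial, t_parallel) / n_procs
```

By these definitions, the report's "almost 19 times on 20 workstations" corresponds to an efficiency of about 95 percent.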

  14. The effect of earthquake on architecture geometry with non-parallel system irregularity configuration

    NASA Astrophysics Data System (ADS)

    Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri

    2017-12-01

    Indonesia is an area prone to earthquakes that may cause casualties and damage to buildings. The casualties are caused less by the earthquake itself than by building collapse. The collapse of a building results from the building's behaviour under the earthquake, and it depends on many factors, such as architectural design, geometry configuration of structural elements in horizontal and vertical plans, earthquake zone, geographical location (distance to the earthquake center), soil type, material quality, and construction quality. One of the geometry configurations that may lead to the collapse of a building is the irregular configuration of a non-parallel system. In accordance with FEMA-451B, a non-parallel system irregularity exists if the vertical lateral force-resisting elements are neither parallel nor symmetric with respect to the main orthogonal axes of the lateral force-resisting system. Such a configuration may lead to torsion, diagonal translation and local damage to buildings. This does not mean that the non-parallel irregular configuration must be avoided in architectural design; however, the designer must know the consequences of earthquake behaviour for buildings with an irregular configuration of a non-parallel system. The present research has the objective of identifying earthquake behaviour in architectural geometry with an irregular configuration of a non-parallel system. The research was quantitative, with a simulation experimental method. It consisted of 5 models, for which architectural data and model structure data were inputted and analyzed using the software SAP2000 to find out their performance, and ETAB2015 to determine the eccentricity that occurred. The output of the software analysis was tabulated, graphed, compared and analyzed against relevant theories. For strong earthquake zones, avoid designing buildings that wholly form an irregular configuration of a non-parallel system. If it is inevitable to design a building with parts containing an irregular configuration of a non-parallel system, make it more rigid by forming a triangle module, and use the formula. Good collaboration is needed between architects and structural experts in creating earthquake architecture.

  15. A CS1 pedagogical approach to parallel thinking

    NASA Astrophysics Data System (ADS)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. 
The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.
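    A parallel pattern of the kind typically introduced at the CS1 level, and of the sort such an intervention targets, is divide-and-combine reduction: split the data, process the parts concurrently, then combine partial results. A minimal Python sketch (illustrative only; the study's PAT visualization tool is not reproduced here):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, n_workers=4):
    """CS1-style divide-and-combine: split the list into chunks, sum each
    chunk in its own worker, then combine the partial sums."""
    chunk = max(1, (len(values) + n_workers - 1) // n_workers)  # ceil-divide
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(sum, parts))  # combine the partial results
```

The result is independent of how the work is split or scheduled, which is exactly the property students must learn to reason about when moving from sequential to parallel thinking.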

  16. Parallel computing works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  17. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
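    One simple realization of concurrent access to a shared file is to give each process its own disjoint byte range, so readers never overlap. The sketch below illustrates that partitioning idea only; it is not one of the paper's proposed organizations, and real parallel file systems add striping across devices, collective buffering, and more:

```python
import os

def read_partition(path, rank, n_procs):
    """Process `rank` of `n_procs` reads its own contiguous byte range of a
    shared file, so concurrent readers touch disjoint regions."""
    size = os.path.getsize(path)
    chunk = (size + n_procs - 1) // n_procs  # ceil-divide the file
    start = rank * chunk
    length = max(0, min(chunk, size - start))  # last rank may get less
    with open(path, "rb") as f:
        f.seek(start)
        return f.read(length)
```

Concatenating the partitions in rank order reconstructs the file, which is what makes the scheme usable both by concurrent processes and, trivially, by a sequential program reading all ranks in turn.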

  18. Xyce Parallel Electronic Simulator : users' guide, version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving upon the capability of the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.

  19. Rationale and design of the Multidisciplinary Approach to Novel Therapies in Cardiology Oncology Research Trial (MANTICORE 101--Breast): a randomized, placebo-controlled trial to determine if conventional heart failure pharmacotherapy can prevent trastuzumab-mediated left ventricular remodeling among patients with HER2+ early breast cancer using cardiac MRI.

    PubMed

    Pituskin, Edith; Haykowsky, Mark; Mackey, John R; Thompson, Richard B; Ezekowitz, Justin; Koshman, Sheri; Oudit, Gavin; Chow, Kelvin; Pagano, Joseph J; Paterson, Ian

    2011-07-27

    MANTICORE 101 - Breast (Multidisciplinary Approach to Novel Therapies in Cardiology Oncology Research) is a randomized trial to determine if conventional heart failure pharmacotherapy (angiotensin converting enzyme inhibitor or beta-blocker) can prevent trastuzumab-mediated left ventricular remodeling, measured with cardiac MRI, among patients with HER2+ early breast cancer. One hundred and fifty-nine patients with histologically confirmed HER2+ breast cancer will be enrolled in a parallel 3-arm, randomized, placebo-controlled, double-blind design. After baseline assessments, participants will be randomized in a 1:1:1 ratio to an angiotensin-converting enzyme inhibitor (perindopril), beta-blocker (bisoprolol), or placebo. Participants will receive drug or placebo for 1 year beginning 7 days before trastuzumab therapy. Dosages for all groups will be systematically up-titrated, as tolerated, at 1-week intervals for a total of 3 weeks. The primary objective of this randomized clinical trial is to determine if conventional heart failure pharmacotherapy can prevent trastuzumab-mediated left ventricular remodeling among patients with HER2+ early breast cancer, as measured by the 12-month change in left ventricular end-diastolic volume on cardiac MRI. Secondary objectives are to 1) determine the evolution of left ventricular remodeling on cardiac MRI in patients with HER2+ early breast cancer, 2) understand the mechanism of trastuzumab-mediated cardiac toxicity by assessing for the presence of myocardial injury and apoptosis on serum biomarkers and cardiac MRI, and 3) correlate cardiac biomarkers of myocyte injury and extracellular matrix remodeling with left ventricular remodeling on cardiac MRI in patients with HER2+ early breast cancer. Cardiac toxicity as a result of cancer therapies is now recognized as a significant health problem of increasing prevalence.
To our knowledge, MANTICORE will be the first randomized trial testing proven heart failure pharmacotherapy in the prevention of trastuzumab-mediated cardiotoxicity. We expect the findings of this trial to provide important evidence in the development of guidelines for preventive therapy. ClinicalTrials.gov: NCT01016886.

  20. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
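    The similarity-sensitive read/write behaviour described above can be sketched as a tiny sparse distributed memory: random hard locations, counter updates at every location within a Hamming radius of the write address, and thresholded counter sums on read. The parameters below are toy values, far smaller than the 256-bit prototype, and the sketch follows the general idea rather than the prototype's hardware design:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

class SDM:
    """Minimal sparse distributed memory sketch (after Kanerva)."""
    def __init__(self, n_locations=256, word_bits=32, radius=14, seed=0):
        rng = random.Random(seed)
        self.word_bits = word_bits
        self.radius = radius
        # Fixed random hard-location addresses; counters start at zero.
        self.addresses = [[rng.randint(0, 1) for _ in range(word_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * word_bits for _ in range(n_locations)]

    def _active(self, address):
        """Indices of hard locations within the Hamming radius of `address`."""
        return [i for i, a in enumerate(self.addresses)
                if hamming(a, address) <= self.radius]

    def write(self, address, word):
        """Increment/decrement bit counters at every activated location."""
        for i in self._active(address):
            for j, bit in enumerate(word):
                self.counters[i][j] += 1 if bit else -1

    def read(self, address):
        """Sum counters over activated locations and threshold back to bits."""
        sums = [0] * self.word_bits
        for i in self._active(address):
            for j in range(self.word_bits):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]
```

Storing a word autoassociatively (address = word) and then reading at an address with a few flipped bits recovers the original word, which is the similarity sensitivity the abstract describes.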

  1. Differential Draining of Parallel-Fed Propellant Tanks in Morpheus and Apollo Flight

    NASA Technical Reports Server (NTRS)

    Hurlbert, Eric; Guardado, Hector; Hernandez, Humberto; Desai, Pooja

    2015-01-01

    Parallel-fed propellant tanks are an advantageous configuration for many spacecraft. Parallel-fed tanks allow the center of gravity (cg) to be maintained over the engine(s), as opposed to serial-fed propellant tanks, which produce a cg shift as propellants are drained from one tank first and then the other. Parallel-fed tanks also allow for tank isolation if that is needed. Parallel tanks and feed systems have been used in several past vehicles, including the Apollo Lunar Module. The design of the feed system connecting the parallel tanks is critical to maintaining balance in the propellant tanks. The design must account for and minimize the effect of manufacturing variations that could cause delta-p or mass flowrate differences, which would lead to propellant imbalance. Other sources of differential draining will be discussed. Fortunately, physics provides some self-correcting behaviors that tend to equalize any initial imbalance. Whether active control of the propellant level in each tank is required, or can be avoided, is also an important question to answer. In order to provide flight data on parallel-fed tanks and differential draining for cryogenic propellants (as well as any other fluid), a vertical test bed (flying lander) for terrestrial use was employed. The Morpheus vertical test bed is a parallel-fed propellant tank system that uses passive design to keep the propellant tanks balanced. The system is operated in blowdown. The Morpheus vehicle was instrumented with a capacitance level sensor in each propellant tank in order to measure the draining of propellants over 34 tethered and 12 free flights. Morpheus did experience an approximately 20 lbm imbalance in one pair of tanks. The cause of this imbalance will be discussed. This paper discusses the analysis, design, flight simulation, vehicle dynamic modeling, and flight test of the Morpheus parallel-fed propellant system.
The Apollo LEM data is also examined in this summary report of the flight data.

  2. Effect of the emotional freedom technique on perceived stress, quality of life, and cortisol salivary levels in tension-type headache sufferers: a randomized controlled trial.

    PubMed

    Bougea, Anastasia M; Spandideas, Nick; Alexopoulos, Evangelos C; Thomaides, Thomas; Chrousos, George P; Darviri, Christina

    2013-01-01

    To evaluate the short-term effects of the emotional freedom technique (EFT) on tension-type headache (TTH) sufferers. We used a parallel-group design, with participants randomly assigned to the emotional freedom intervention (n = 19) or a control arm (standard care, n = 16). The study was conducted at the outpatient Headache Clinic at the Korgialenio Benakio Hospital of Athens. Thirty-five patients meeting criteria for frequent TTH according to International Headache Society guidelines were enrolled. Participants were instructed to use the EFT method twice a day for two months. Study measures included the Perceived Stress Scale, the Multidimensional Health Locus of Control Scale, and the Short Form-36 questionnaire. Salivary cortisol levels and the frequency and intensity of headache episodes were also assessed. Within the treatment arm, perceived stress, scores for all Short Form-36 subscales, and the frequency and intensity of the headache episodes were all significantly reduced. No differences in cortisol levels were found in any group before and after the intervention. EFT was reported to benefit patients with TTH. This randomized controlled trial shows promising results for not only the frequency and severity of headaches but also other lifestyle parameters. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. A randomized study comparing outcomes of stapled and hand-sutured anastomoses in patients undergoing open gastrointestinal surgery.

    PubMed

    Chandramohan, S M; Gajbhiye, Raj Narenda; Agwarwal, Anil; Creedon, Erin; Schwiers, Michael L; Waggoner, Jason R; Tatla, Daljit

    2013-08-01

    Although stapling is an alternative to hand-suturing in gastrointestinal surgery, recent trials specifically designed to evaluate differences between the two in surgery time, anastomosis time, and return to bowel activity are lacking. This trial compared the outcomes of the two in subjects undergoing open gastrointestinal surgery. Adult subjects undergoing emergency or elective surgery requiring a single gastric, small, or large bowel anastomosis were enrolled into this open-label, prospective, randomized, interventional, parallel, multicenter, controlled trial. Randomization was assigned in a 1:1 ratio between the hand-sutured group (n = 138) and the stapled group (n = 142). Anastomosis time, surgery time, and time to bowel activity were collected and compared as primary endpoints. A total of 280 subjects were enrolled from April 2009 to September 2010. Only the time of anastomosis was significantly different between the two arms: 17.6 ± 1.90 min (stapled) and 20.6 ± 1.90 min (hand-sutured). This difference was deemed not clinically or economically meaningful. Safety outcomes and other secondary endpoints were similar between the two arms. Mechanical stapling is faster than hand-suturing for the construction of gastrointestinal anastomoses. Apart from this, stapling and hand-suturing are similar with respect to the outcomes measured in this trial.

  4. Double-blind, randomized, controlled trial of rasagiline as monotherapy in early Parkinson's disease patients.

    PubMed

    Stern, Matthew B; Marek, Kenneth L; Friedman, Joseph; Hauser, Robert A; LeWitt, Peter A; Tarsy, Daniel; Olanow, C Warren

    2004-08-01

    Rasagiline (N-propargyl-1(R)-aminoindan) mesylate is a potent, selective, and irreversible monoamine oxidase-B inhibitor. This study was designed to evaluate the safety, tolerability, and preliminary efficacy of rasagiline monotherapy in early Parkinson's disease (PD) patients not receiving levodopa. The study was performed as a multicenter, parallel-group, double-blind, randomized, placebo-controlled, 10-week study. Fifty-six PD patients were randomly assigned to rasagiline mesylate 1, 2, or 4 mg once daily, or placebo. A 3-week dose-escalation period was followed by a 7-week maintenance phase. At week 10, the mean (±SE) changes from baseline in total Unified Parkinson's Disease Rating Scale (UPDRS) score were -1.8 (±1.3), -3.6 (±1.7), -3.6 (±1.2), and -0.5 (±0.8) in the rasagiline 1, 2, and 4 mg/day and placebo groups, respectively. Analysis of responders showed that 28% of patients (12 of 43) receiving rasagiline had an improvement in total UPDRS score of greater than 30%, compared with none of the patients receiving placebo (P < 0.05, Fisher's exact test). The frequency and types of adverse events reported by rasagiline-treated and placebo-treated patients were similar. These results suggest that rasagiline monotherapy is well tolerated and efficacious in early PD. Copyright 2004 Movement Disorder Society
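    The responder analysis quoted above (12 of 43 rasagiline patients improving vs. none on placebo) can be reproduced with a one-sided Fisher's exact test using only the standard library. Note one assumption: the placebo group size of 13 is inferred from the 56 enrolled patients, since the abstract does not state it directly.

```python
from math import comb

# Counts from the abstract: 12 of 43 rasagiline-treated patients improved
# by >30% on total UPDRS, versus 0 placebo patients. The placebo group
# size (13) is inferred from 56 total enrolled (assumption).
responders, n_rasagiline, n_total = 12, 43, 56
n_placebo = n_total - n_rasagiline

# One-sided Fisher's exact test: under the hypergeometric null of no
# treatment effect, the probability that all 12 responders fall in the
# rasagiline arm by chance.
p = comb(n_rasagiline, responders) * comb(n_placebo, 0) / comb(n_total, responders)
print(f"one-sided p = {p:.4f}")
```

    The resulting p-value falls below 0.05, consistent with the significance the abstract reports.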

  5. Eight Weeks of Cosmos caudatus (Ulam Raja) Supplementation Improves Glycemic Status in Patients with Type 2 Diabetes: A Randomized Controlled Trial.

    PubMed

    Cheng, Shi-Hui; Ismail, Amin; Anthony, Joseph; Ng, Ooi Chuan; Hamid, Azizah Abdul; Barakatun-Nisak, Mohd Yusof

    2015-01-01

    Objectives. Optimizing glycemic control is crucial to prevent type 2 diabetes related complications. Cosmos caudatus is reported to have a promising effect in improving plasma blood glucose in an animal model. However, its impact on humans remains unclear. This study was carried out to evaluate the effectiveness of C. caudatus on glycemic status in patients with type 2 diabetes. Materials and Methods. In this randomized controlled trial with a two-arm parallel-group design, a total of 101 subjects with type 2 diabetes were randomly allocated to the diabetic-ulam group or diabetic controls for eight weeks. Subjects in the diabetic-ulam group consumed 15 g of C. caudatus daily for eight weeks, while diabetic controls abstained from taking C. caudatus. Both groups received standard lifestyle advice. Results. After 8 weeks of supplementation, C. caudatus significantly reduced serum insulin (-1.16 versus +3.91), reduced HOMA-IR (-1.09 versus +1.34), and increased QUICKI (+0.05 versus -0.03) in the diabetic-ulam group compared with the diabetic controls. HbA1c levels improved, although not statistically significantly (-0.76% versus -0.37%). C. caudatus was safe to consume. Conclusions. C. caudatus supplementation significantly improves insulin resistance and insulin sensitivity in patients with type 2 diabetes.
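    HOMA-IR and QUICKI, the indices reported above, are standard derived quantities rather than direct measurements. A minimal sketch of their usual formulas follows; the input values are illustrative and are not taken from the trial.

```python
from math import log10

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    # Homeostatic Model Assessment of insulin resistance:
    # fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5.
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    # Quantitative Insulin Sensitivity Check Index:
    # 1 / (log10 insulin (uU/mL) + log10 glucose (mg/dL)).
    return 1.0 / (log10(insulin_uU_ml) + log10(glucose_mg_dl))

# Illustrative fasting values (not study data):
print(round(homa_ir(6.0, 10.0), 2))   # glucose 6.0 mmol/L, insulin 10 uU/mL
print(round(quicki(108.0, 10.0), 3))  # glucose 108 mg/dL, insulin 10 uU/mL
```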

  6. National Combustion Code: Parallel Performance

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2001-01-01

    This report discusses the National Combustion Code (NCC). The NCC is an integrated system of codes for the design and analysis of combustion systems. The advanced features of the NCC meet designers' requirements for model accuracy and turn-around time. The fundamental features at the inception of the NCC were parallel processing and unstructured mesh. The design and performance of the NCC are discussed.

  7. Ropes: Support for collective operations among distributed threads

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Cronk, David

    1995-01-01

    Lightweight threads are becoming increasingly useful in supporting parallelism and asynchronous control structures in applications and language implementations. Recently, systems have been designed and implemented to support interprocessor communication between lightweight threads so that threads can be exploited in a distributed memory system. Their use, in this setting, has been largely restricted to supporting latency hiding techniques and functional parallelism within a single application. However, to execute data parallel codes independent of other threads in the system, collective operations and relative indexing among threads are required. This paper describes the design of ropes: a scoping mechanism for collective operations and relative indexing among threads. We present the design of ropes in the context of the Chant system, and provide performance results evaluating our initial design decisions.
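    A rope, as described above, scopes collective operations to a group of threads addressed by relative index. A minimal single-use sum-reduction across such a group can be sketched with Python threads; the `Rope` API below is hypothetical, not Chant's.

```python
import threading

class Rope:
    """Sketch of a rope (hypothetical API): a fixed-size group of threads
    supporting relative indexing and a single-use collective sum-reduction."""
    def __init__(self, size):
        self.size = size
        self.barrier = threading.Barrier(size)
        self._lock = threading.Lock()
        self._acc = 0

    def reduce_sum(self, value):
        # Each member contributes a value, then blocks until every member
        # has arrived; afterwards all members observe the same total.
        with self._lock:
            self._acc += value
        self.barrier.wait()
        return self._acc

rope = Rope(4)
results = [None] * 4

def worker(rank):                      # rank = index relative to the rope
    results[rank] = rope.reduce_sum(rank + 1)

threads = [threading.Thread(target=worker, args=(r,)) for r in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # every member sees 1+2+3+4 = 10
```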

  8. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.

  9. Assessment of the radioanatomic positioning of the osteoarthritic knee in serial radiographs: comparison of three acquisition techniques.

    PubMed

    Le Graverand, M-P H; Mazzuca, S; Lassere, M; Guermazi, A; Pickering, E; Brandt, K; Peterfy, C; Cline, G; Nevitt, M; Woodworth, T; Conaghan, P; Vignon, E

    2006-01-01

    Recent studies using various standardized radiographic acquisition techniques have demonstrated the necessity of reproducible radioanatomic alignment of the knee to assure precise measurements of medial tibiofemoral joint space width (JSW). The objective of the present study was to characterize the longitudinal performance of several acquisition techniques with respect to long-term reproducibility of positioning of the knee, and the impact of changes in positioning on the rate and variability of joint space narrowing (JSN). Eighty subjects were randomly selected from each of three cohorts followed in recent studies of the radiographic progression of knee osteoarthritis (OA): the Health ABC study (paired fixed-flexion [FF] radiographs taken at a 36-month interval); the Glucosamine Arthritis Intervention Trial (GAIT) (paired metatarsophalangeal [MTP] radiographs obtained at a 12-month interval), and a randomized clinical trial of doxycycline (fluoroscopically assisted semiflexed anteroposterior (AP) radiographs taken at a 16-month interval). Manual measurements were obtained from each radiograph to represent markers of radioanatomic positioning of the knee (alignment of the medial tibial plateau and X-ray beam, knee rotation, femorotibial angle) and to evaluate minimum JSW (mJSW) in the medial tibiofemoral compartment. The effects on the mean annualized rate of JSN and on the variability of that rate of highly reproduced vs variable positioning of the knee in serial radiographs were evaluated. Parallel or near-parallel alignment was achieved significantly more frequently with the fluoroscopically guided positioning used in the semiflexed AP protocol than with either the non-fluoroscopic FF or MTP protocol (68% vs 14% for both FF and MTP protocols when measured at the midpoint of the medial compartment; 75% vs 26% and 34% for the FF and MTP protocols, respectively, when measured at the site of mJSW; P<0.001 for each). 
Knee rotation was reproduced more frequently in semiflexed AP radiographs than in FF radiographs (66% vs 45%, P<0.01). In contrast, the FF technique yielded a greater proportion of paired radiographs in which the femorotibial angle was accurately reproduced than the semiflexed AP or MTP protocol (78% vs 59% and 56%, respectively, P<0.01 for each). Notably, only paired radiographs with parallel or near-parallel alignment exhibited a mean rate of JSN (+/-SD) in the OA knee that was more rapid and less variable than that measured in all knees (0.186+/-0.274 mm/year, standardized response to mean [SRM]=0.68 vs 0.128+/-0.291 mm/year, SRM=0.44). This study confirms the importance of parallel radioanatomic alignment of the anterior and posterior margins of the medial tibial plateau in detecting JSN in subjects with knee OA. The use of radiographic methods that assure parallel alignment during serial X-ray examinations will permit the design of more efficient studies of biomarkers of OA progression and of structure modification in knee OA.
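    The standardized response mean (SRM) used above is simply the mean annualized change divided by its standard deviation; the figures quoted for well-aligned knees reproduce the reported value:

```python
# SRM = mean change / SD of change, using the values quoted in the abstract
# for paired radiographs with parallel or near-parallel alignment.
mean_jsn, sd_jsn = 0.186, 0.274   # joint space narrowing, mm/year
srm = mean_jsn / sd_jsn
print(round(srm, 2))  # 0.68, matching the reported SRM
```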

  10. Neurophysiological study on the effect of various short durations of deep breathing: A randomized controlled trial.

    PubMed

    Cheng, Kok Suen; Han, Ray P S; Lee, Poh Foong

    2018-02-01

    This study examined the effects of short-duration deep breathing on EEG power with topography, based on a parallel-group randomized controlled trial design, which was lacking in prior reports. Fifty participants were split into 4 groups: control (CONT) and deep breathing (DB) for 5 (DB5), 7 (DB7), and 9 (DB9) minutes. EEG recordings were obtained during baseline, the deep breathing session, after deep breathing, and at a follow-up session after 7 days of consecutive practice. Frontal theta power of DB5 and DB9 was significantly larger than that of CONT after the deep breathing session (p = 0.027 and p = 0.006, respectively), and notably, the theta topography showed a centrally focused distribution for DB7 and DB9. The results were consistent with previous literature, albeit for certain deep breathing durations only, indicating a possible linkage between deep breathing duration and the neurophysiology of the brain. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Effect of Store and Forward Teledermatology on Quality of Life

    PubMed Central

    Whited, John D.; Warshaw, Erin M.; Edison, Karen E.; Kapur, Kush; Thottapurathu, Lizy; Raju, Srihari; Cook, Bethany; Engasser, Holly; Pullen, Samantha; Parks, Patricia; Sindowski, Tom; Motyka, Danuta; Brown, Rodney; Moritz, Thomas E.; Datta, Santanu K.; Chren, Mary-Margaret; Marty, Lucinda; Reda, Domenic J.

    2013-01-01

    Importance: Although research on quality of life and dermatologic conditions is well represented in the literature, information on teledermatology’s effect on quality of life is virtually absent. Objective: To determine the effect of store and forward teledermatology on quality of life. Design: Two-site, parallel-group, superiority randomized controlled trial. Setting: Dermatology clinics and affiliated sites of primary care at 2 US Department of Veterans Affairs medical facilities. Participants: Patients being referred to a dermatology clinic were randomly assigned, stratified by site, to teledermatology or the conventional consultation process. Among the 392 patients who met the inclusion criteria and were randomized, 326 completed the allocated intervention and were included in the analysis. Interventions: Store and forward teledermatology (digital images and a standardized history) or conventional text-based consultation processes were used to manage the dermatology consultations. Patients were followed up for 9 months. Main Outcome Measures: The primary end point was change in Skindex-16 scores, a skin-specific quality-of-life instrument, between baseline and 9 months. A secondary end point was change in Skindex-16 scores between baseline and 3 months. Results: Patients in both randomization groups demonstrated a clinically significant improvement in Skindex-16 scores between baseline and 9 months with no significant difference by randomization group (P=.66, composite score). No significant difference in Skindex-16 scores by randomization group between baseline and 3 months was found (P=.39, composite score). Conclusions: Compared with the conventional consultation process, store and forward teledermatology did not result in a statistically significant difference in skin-related quality of life at 3 or 9 months after referral. Trial Registration: clinicaltrials.gov Identifier: NCT00488293 PMID:23426111

  12. The incorporation of focused history in checklist for early recognition and treatment of acute illness and injury.

    PubMed

    Jayaprakash, Namita; Ali, Rashid; Kashyap, Rahul; Bennett, Courtney; Kogan, Alexander; Gajic, Ognjen

    2016-08-31

    Diagnostic error and delay are critical impediments to the safety of critically ill patients. The checklist for early recognition and treatment of acute illness and injury (CERTAIN) has been developed as a tool that facilitates timely and error-free evaluation of critically ill patients. While the focused history is an essential part of the CERTAIN framework, it is not clear how best to choreograph this step in the process of evaluating and treating the acutely decompensating patient. An unblinded crossover clinical simulation study was designed in which volunteer critical care clinicians (fellows and attendings) were randomly assigned to start with obtaining a focused history choreographed either in series with (after) or in parallel to the primary survey. A focused history was obtained using the standardized SAMPLE model that is incorporated into Advanced Trauma Life Support (ATLS) and Pediatric Advanced Life Support (PALS). Clinicians were asked to assess six acutely decompensating patients using predetermined clinical scenarios (three in series choreography, three in parallel). Once the initial choreography was completed, the clinician would cross over to the alternative choreography. The primary outcome was cognitive burden, assessed through the NASA task load index. The secondary outcome was time to completion of a focused history. A total of 84 simulated cases (42 in parallel, 42 in series) were tested on 14 clinicians. Both the overall cognitive load and time to completion improved with each successive practice scenario; however, no difference was observed between the series and parallel choreographies. The median (IQR) overall NASA TLX task load index was 39 (17-58) for series and 43 (27-52) for parallel, p = 0.57. The median (IQR) time to completion of the tasks was 125 (112-158) seconds in series and 122 (108-158) seconds in parallel, p = 0.92. 
In this clinical simulation study assessing the incorporation of a focused history into the primary survey of a non-trauma critically ill patient, there was no difference in cognitive burden or time to task completion when using series choreography (after the exam) compared to parallel choreography (concurrent with the primary survey physical exam). However, with repetition of the task both overall task load and time to completion improved in each of the choreographies.

  13. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three level parallel middleware that can be interfaced to a large scale global design environment for code independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computation tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  14. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
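    Because each finite-difference gradient component is an independent function evaluation, the gradient step parallelizes naturally across workers. A minimal process-parallel sketch follows; the quadratic objective is a stand-in of mine, not POST3D's trajectory cost.

```python
from multiprocessing import Pool

def objective(x):
    # Stand-in for a trajectory-simulation cost function (assumption).
    return sum((xi - 1.0) ** 2 for xi in x)

def fd_component(args):
    """One forward-difference gradient component; each call is independent,
    which is what makes the gradient step embarrassingly parallel."""
    x, i, h = args
    xp = list(x)
    xp[i] += h
    return (objective(xp) - objective(x)) / h

def parallel_gradient(x, h=1e-6, workers=4):
    # SPMD flavour: the same worker code runs on different data (each
    # perturbed design variable), mirroring how POST3D splits its gradients.
    with Pool(workers) as pool:
        return pool.map(fd_component, [(x, i, h) for i in range(len(x))])

if __name__ == "__main__":
    print(parallel_gradient([0.0, 2.0, 3.0]))  # approx. [-2, 2, 4]
```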

  15. Options for Parallelizing a Planning and Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.

  16. Scalable Unix commands for parallel processors : a high-performance implementation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, E.; Lusk, E.; Gropp, W.

    2001-06-22

    We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
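    The idea of natural parallel versions of common Unix commands can be sketched without MPI: a hypothetical `parallel_ls` below fans directory listings out over a local process pool, whereas the real tools distribute one rank per cluster node.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def list_dir(path):
    """Worker for a parallel 'ls': one process per directory."""
    try:
        return path, sorted(os.listdir(path))
    except OSError as e:
        return path, [f"error: {e}"]

def parallel_ls(paths, workers=4):
    # In the actual MPI commands each rank examines its own node's file
    # system; a local process pool stands in for that here (assumption).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(list_dir, paths))

if __name__ == "__main__":
    for path, entries in parallel_ls(["/tmp", "."]).items():
        print(path, len(entries), "entries")
```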

  17. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
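    Tensor product computations of the kind described above apply 1-D kernels along each axis rather than ever forming the product operator. A small sketch applying (A ⊗ B) to a vector this way:

```python
def matvec(M, v):
    # Dense matrix-vector product over plain lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def kron_apply(A, B, x):
    """Apply (A kron B) to x by sweeping 1-D kernels along each axis,
    never forming the tensor-product matrix explicitly."""
    n, m = len(A), len(B)
    # View x as an n x m array: X[i][j] = x[i*m + j].
    X = [x[i * m:(i + 1) * m] for i in range(n)]
    # Apply B along rows (inner dimension) ...
    Y = [matvec(B, row) for row in X]
    # ... then A along columns (outer dimension).
    cols = [matvec(A, [Y[i][j] for i in range(n)]) for j in range(m)]
    return [cols[j][i] for i in range(n) for j in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
x = [1.0, 2.0, 3.0, 4.0]
print(kron_apply(A, B, x))  # [10.0, 7.0, 22.0, 15.0] == (A kron B) x
```

    The same sweep structure is what lets each 1-D kernel, such as a tridiagonal solve, run in parallel across the other dimension.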

  18. Collimator of multiple plates with axially aligned identical random arrays of apertures

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.; Underwood, J. H. (Inventor)

    1973-01-01

    A collimator is disclosed for examining the spatial location of distant sources of radiation and for imaging by projection, small, near sources of radiation. The collimator consists of a plurality of plates, all of which are pierced with an identical random array of apertures. The plates are mounted perpendicular to a common axis, with like apertures on consecutive plates axially aligned so as to form radiation channels parallel to the common axis. For near sources, the collimator is interposed between the source and a radiation detector and is translated perpendicular to the common axis so as to project radiation traveling parallel to the common axis incident to the detector. For far sources the collimator is scanned by rotating it in elevation and azimuth with a detector to determine the angular distribution of the radiation from the source.

  19. Removal of antibiotics in a parallel-plate thin-film-photocatalytic reactor: Process modeling and evolution of transformation by-products and toxicity.

    PubMed

    Özkal, Can Burak; Frontistis, Zacharias; Antonopoulou, Maria; Konstantinou, Ioannis; Mantzavinos, Dionissios; Meriç, Süreyya

    2017-10-01

    Photocatalytic degradation of the antibiotic sulfamethoxazole (SMX) has been studied under recycling batch and homogeneous flow conditions in a thin-film-coated immobilized system, namely a parallel-plate (PPL) reactor. The process was designed experimentally and evaluated statistically with a factorial design (FD) approach, with the intent of providing a mathematical model that takes into account the parameters influencing process performance. Initial antibiotic concentration, UV energy level, irradiated surface area, water matrix (ultrapure and secondary treated wastewater), and time were defined as model parameters. A full 2^5 experimental design consisted of 32 randomized experiments. PPL reactor test experiments were carried out in order to set boundary levels for hydraulic, volumetric, and defined process parameters. TTIP-based thin films with polyethylene glycol + TiO2 additives were fabricated according to the previously described methodology. Antibiotic degradation was monitored by high-performance liquid chromatography analysis, while the degradation products were identified by LC-TOF-MS analysis. Acute toxicity of untreated and treated SMX solutions was tested by the standard Daphnia magna method. Based on the obtained mathematical model, the response of the immobilized photocatalytic system is described with a polynomial equation. The statistically significant positive effects are initial SMX concentration, process time, and the combined effect of both, while the combined effect of water matrix and irradiated surface area has an adverse effect on the rate of antibiotic degradation by photocatalytic oxidation. Process efficiency and the validity of the acquired mathematical model were also verified for the antibiotics levofloxacin and cefaclor. Immobilized photocatalytic degradation in the PPL reactor configuration was found capable of reducing effluent toxicity by simultaneous degradation of the SMX parent compound and its transformation by-products (TBPs). Copyright © 2017. Published by Elsevier B.V.
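    A full 2^5 factorial design like the one described above enumerates every combination of five two-level factors, giving 32 runs. A sketch using the conventional coded -1/+1 levels; the factor names are illustrative labels of mine, not the paper's coding.

```python
from itertools import product
from random import seed, shuffle

# Five two-level factors from the abstract; -1/+1 are the usual coded
# factorial levels (illustrative labels, not the paper's own coding).
factors = ["SMX_conc", "UV_energy", "irradiated_area", "water_matrix", "time"]
runs = [dict(zip(factors, levels)) for levels in product([-1, +1], repeat=5)]
assert len(runs) == 32  # full 2^5 design -> 32 runs

seed(1)
shuffle(runs)  # the 32 experiments are executed in random order
print(runs[0])
```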

  20. A portable low-cost 3D point cloud acquiring method based on structure light

    NASA Astrophysics Data System (ADS)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which addresses the lack of texture information and the low efficiency of acquiring point cloud data with only one pair of inexpensive cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern; that is, a coded pattern is projected onto the target surface in order to form texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After optimization of the global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system for point-cloud data is accomplished. Evaluation of point cloud accuracy shows that point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic applications and has better practical application value.

  1. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  2. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
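    The core of the path-level strategy is the first stage: re-arranging the simulation path into batches of non-conflicting nodes. A greedy 1-D sketch of that stage follows; it illustrates the idea only and is not the authors' algorithm.

```python
import random

def conflict_free_batches(path, radius):
    """Greedy re-arrangement of a 1-D simulation path into batches of
    nodes whose neighbourhoods (|a - b| <= radius) do not overlap.
    Nodes within a batch can then be simulated concurrently."""
    batches, remaining = [], list(path)
    while remaining:
        batch, deferred = [], []
        for node in remaining:
            if all(abs(node - b) > radius for b in batch):
                batch.append(node)
            else:
                deferred.append(node)   # conflicts: simulate in a later batch
        batches.append(batch)
        remaining = deferred
    return batches

random.seed(0)
path = random.sample(range(20), 10)   # a random visiting order of 10 nodes
batches = conflict_free_batches(path, radius=2)
print(batches)
```

    Because no two nodes in a batch fall inside each other's neighbourhood, simulating a batch concurrently conditions on the same data as the sequential visit, which is the property the authors exploit to reproduce the non-parallelized realizations exactly.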

  3. A random rule model of surface growth

    NASA Astrophysics Data System (ADS)

    Mello, Bernardo A.

    2015-02-01

    Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
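    A minimal sketch of the idea: sites are visited in sequential scan order, and randomness enters only through which rule fires at each site. The two-rule table here (etch or do nothing) is my simplifying assumption, not the paper's rule set.

```python
import random

def sweep(h, p_etch=0.5, rng=random):
    """One sequential scan of a periodic 1-D substrate: sites are visited
    in order, and at each site a rule is chosen at random (the paper's
    idea); here the rule is either an etching-like update or no update."""
    L = len(h)
    for i in range(L):                     # sequential, not random, site order
        if rng.random() < p_etch:
            for j in ((i - 1) % L, (i + 1) % L):
                if h[j] > h[i]:
                    h[j] = h[i]            # corrode higher neighbours down
            h[i] -= 1                      # then etch the site itself
    return h

random.seed(42)
h = [0] * 64
for _ in range(100):
    sweep(h)
mean = sum(h) / len(h)
w2 = sum((x - mean) ** 2 for x in h) / len(h)  # squared interface width
print(round(w2, 3))
```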

  4. An Efficient Multicore Implementation of a Novel HSS-Structured Multifrontal Solver Using Randomized Sampling

    DOE PAGES

    Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois -Henry; ...

    2016-10-27

    Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared-memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.

  5. Design Method of Digital Optimal Control Scheme and Multiple Paralleled Bridge Type Current Amplifier for Generating Gradient Magnetic Fields in MRI Systems

    NASA Astrophysics Data System (ADS)

    Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo

    This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with four-quadrant choppers for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track highly precise current patterns in the gradient coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other and uses optimally designed switching gate pulse patterns that are unaffected by the large filter current ripple amplitude. The optimal control implementation and linear control theory suit GC current amplifiers well and together yield excellent characteristics. The digital control system can be realized easily on DSPs or microprocessors. Multiple microprocessors operating in parallel realize a GC current-pattern-tracking amplifier with two or more paralleled stages under the optimal control design, and excellent results are given for improving the image quality of MRI systems.

  6. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer steps are needed and each step takes less time, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
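    The SISM itself splits the Hamiltonian and treats fast vibrations analytically; as a simpler stand-in (not the SISM), a velocity-Verlet sketch shows the key property that makes symplectic integrators attractive for MD: energy error stays bounded over very long runs instead of drifting.

```python
def velocity_verlet(x, v, omega, dt, steps):
    """Symplectic (velocity-Verlet) integration of a harmonic oscillator
    x'' = -omega^2 x; the energy stays bounded even over long runs."""
    for _ in range(steps):
        v += 0.5 * dt * (-omega ** 2 * x)   # half kick
        x += dt * v                         # drift
        v += 0.5 * dt * (-omega ** 2 * x)   # half kick
    return x, v

omega, dt = 1.0, 0.05
x, v = velocity_verlet(1.0, 0.0, omega, dt, 20000)
energy = 0.5 * v * v + 0.5 * omega ** 2 * x * x
print(round(energy, 4))  # stays near the initial 0.5 despite 20,000 steps
```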

  7. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : technical summary.

    DOT National Transportation Integrated Search

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  8. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : final report.

    DOT National Transportation Integrated Search

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  9. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    O'Keefe, Matthew T.

    1996-01-01

    The two primary goals for this report were the design, construction, and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, we pursued further work on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.

  10. Simplified Parallel Domain Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
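    DStep itself uses a two-tiered asynchronous communication architecture on HPC platforms; the programming model it simplifies can be sketched as a toy map/merge traversal. The block below is an illustrative analogue only; the function names and the thread-based parallelism are assumptions, not the DStep API:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
import random

def traverse_block(block):
    """Map phase: each worker traverses its own domain block independently."""
    return Counter("pos" if v >= 0 else "neg" for v in block)

def merge(partials):
    """Reduce phase: fold per-block partial results into one global answer."""
    total = Counter()
    for part in partials:
        total.update(part)
    return total

random.seed(0)
domain = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
blocks = [domain[i::4] for i in range(4)]      # decompose the global domain
with ThreadPoolExecutor(max_workers=4) as pool:
    result = merge(pool.map(traverse_block, blocks))
```

    The point of the simplified model is that users write only the per-block traversal and the merge; distribution, scheduling, and communication are the framework's job.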

  11. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
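    The "simplest use" described above, domain decomposition of a logically cartesian mesh without adaptivity, amounts to assigning each process a contiguous sub-block. A minimal Python sketch of that bookkeeping (PARAMESH itself is a Fortran 90 package; the function names here are illustrative):

```python
def decompose_1d(n_cells, n_ranks, rank):
    """Split n_cells as evenly as possible; return the [lo, hi) range for rank."""
    base, extra = divmod(n_cells, n_ranks)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def decompose_2d(shape, grid, coords):
    """Block-decompose a 2-D logically cartesian mesh over a process grid."""
    (nx, ny), (px, py), (cx, cy) = shape, grid, coords
    return decompose_1d(nx, px, cx), decompose_1d(ny, py, cy)

# A 100 x 60 mesh over a 3 x 2 process grid: rank (0, 0) owns x in [0, 34), y in [0, 30)
block = decompose_2d((100, 60), (3, 2), (0, 0))
```

    Adaptive refinement then subdivides such blocks where the solution demands it, which is the incremental path the abstract describes.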

  12. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    The economic costs together with filter efficiency are taken as targets to optimize the parameter of passive filter. Furthermore, the method of combining pseudo-parallel genetic algorithm with adaptive genetic algorithm is adopted in this paper. In the early stages pseudo-parallel genetic algorithm is introduced to increase the population diversity, and adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of pseudo-parallel genetic algorithm is improved to change with population diversity adaptively. Simulation results show that the filter designed by the proposed method has better filtering effect with lower economic cost, and can be used in engineering.
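    A hedged sketch of the island-style ("pseudo-parallel") genetic algorithm with migration, minimizing a stand-in scalar objective; the population sizes, migration rate, and cost function are illustrative assumptions, not the paper's passive-filter formulation:

```python
import random
random.seed(1)

def cost(x):
    """Stand-in objective; the paper's combines economic cost and filter efficiency."""
    return (x - 0.7) ** 2

def evolve(pop, sigma=0.1):
    """One generation per island: binary tournament selection + Gaussian mutation."""
    new = []
    for _ in pop:
        a, b = random.sample(pop, 2)
        parent = a if cost(a) < cost(b) else b
        new.append(parent + random.gauss(0.0, sigma))
    return new

def migrate(islands, rate=0.2):
    """Ring migration: each island's best may replace a neighbour's worst."""
    bests = [min(p, key=cost) for p in islands]
    for i, p in enumerate(islands):
        if random.random() < rate:   # in the paper the rate adapts to diversity
            p[p.index(max(p, key=cost))] = bests[i - 1]
    return islands

islands = [[random.uniform(-5.0, 5.0) for _ in range(20)] for _ in range(4)]
for _ in range(60):
    islands = [evolve(p) for p in islands]
    islands = migrate(islands)
best = min((min(p, key=cost) for p in islands), key=cost)
```

    Separate islands preserve population diversity in the early stages, which is the property the pseudo-parallel scheme exploits before the adaptive stage takes over.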

  13. Language Classification using N-grams Accelerated by FPGA-based Bloom Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, A; Gokhale, M

    N-Gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom Filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs at 85x the speed of comparable software and 1.45x the speed of the competing hardware design.
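    A software analogue of the design can be sketched as follows: one Bloom filter per language holds that language's n-grams, and a document is classified by the fraction of its n-grams each filter accepts. The hash construction and the tiny training corpora below are illustrative assumptions, not the paper's FPGA implementation:

```python
import hashlib

class BloomFilter:
    """Plain bit-array Bloom filter; salted SHA-256 serves as the hash family."""
    def __init__(self, n_bits=1 << 16, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all((self.bits[pos // 8] >> (pos % 8)) & 1
                   for pos in self._positions(item))

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(corpus):
    bf = BloomFilter()
    for gram in ngrams(corpus):
        bf.add(gram)
    return bf

def classify(text, filters):
    """Pick the language whose filter accepts the largest fraction of n-grams."""
    grams = ngrams(text)
    return max(filters, key=lambda lang:
               sum(g in filters[lang] for g in grams) / len(grams))

filters = {"en": train("the quick brown fox jumps over the lazy dog and then "
                       "the cat sat on the mat with the hat"),
           "de": train("der schnelle braune fuchs springt ueber den faulen "
                       "hund und die katze sitzt auf der alten matte")}
lang = classify("the dog sat on the mat", filters)
```

    Bloom filters never produce false negatives, so every trained n-gram is found; the hardware gains come from running many such membership tests in parallel against on-chip RAM.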

  14. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
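    Of the algorithms reviewed, SENSE is the easiest to sketch: with acceleration factor R = 2, each aliased pixel is a coil-sensitivity-weighted sum of two true pixels, and unfolding solves a small least-squares system per pixel. A minimal noiseless simulation, with random numbers standing in for measured coil sensitivity maps:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_coils = 8, 4                      # pixels per column, receiver coils (R = 2)

truth = rng.normal(size=n)             # the true (unaliased) image column
sens = rng.normal(size=(n_coils, n)) + 1.0   # stand-in coil sensitivity maps

# R = 2 undersampling folds pixel i together with pixel i + n//2 in every coil
folded = np.stack([sens[c, :n // 2] * truth[:n // 2]
                   + sens[c, n // 2:] * truth[n // 2:] for c in range(n_coils)])

# SENSE unfolding: for each aliased pixel solve the (n_coils x 2) system C x = y
recon = np.empty(n)
for i in range(n // 2):
    C = sens[:, [i, i + n // 2]]       # sensitivities of the two folded pixels
    y = folded[:, i]                   # aliased measurements across coils
    x, *_ = np.linalg.lstsq(C, y, rcond=None)
    recon[i], recon[i + n // 2] = x

err = np.max(np.abs(recon - truth))    # exact recovery in this noiseless sketch
```

    With noise, the conditioning of each small system governs the noise amplification, which is exactly what the g-factor mentioned in the abstract quantifies.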

  15. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
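    The building block of these convolver banks is the DHT, which can be computed from the FFT via H = Re(F) − Im(F) and is its own inverse up to a factor of MN. A minimal sketch of the separable 2-D DHT (this illustrates the transform itself, not the paper's multirate convolver-bank implementation):

```python
import numpy as np

def dht(x, axis=-1):
    """1-D discrete Hartley transform via the FFT: H = Re(F) - Im(F)."""
    F = np.fft.fft(x, axis=axis)
    return F.real - F.imag

def dht2(x):
    """Separable 2-D DHT: 1-D transforms along one axis, then the other."""
    return dht(dht(x, axis=0), axis=1)

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 32))
M, N = img.shape

coeffs = dht2(img)                      # real-valued, unlike the 2-D FFT
round_trip = dht2(coeffs) / (M * N)     # the DHT is involutory up to 1/(M N)
err = np.max(np.abs(round_trip - img))
```

    The real-valued coefficients are what make the DHT-based Gabor transform attractive for real-time image processing: no complex arithmetic, and the same machinery serves analysis and synthesis.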

  16. Acupuncture for sequelae of Bell's palsy: a randomized controlled trial protocol

    PubMed Central

    2011-01-01

    Objective Incomplete recovery from facial palsy has a long-term impact on the quality of life, and medical options for the sequelae of Bell's palsy are limited. Invasive treatments and physiotherapy have been employed to relieve symptoms, but there is limited clinical evidence for their effectiveness. Acupuncture is widely used on Bell's palsy patients in East Asia, but there is insufficient evidence for its effectiveness on Bell's palsy sequelae. The objective is to evaluate the efficacy and safety of acupuncture in patients with sequelae of Bell's palsy. Method/Design This study consists of a randomized controlled trial with two parallel arms: an acupuncture group and a waitlist group. The acupuncture group will receive acupuncture treatment three times per week for a total of 24 sessions over 8 weeks. Participants in the waitlist group will not receive any acupuncture treatments during this 8 week period, but they will participate in the evaluations of symptoms at the start of the study, at 5 weeks and at 8 weeks after randomization, at which point the same treatment as the acupuncture group will be provided. The primary outcome will be analyzed by the change in the Facial Disability Index (FDI) from baseline to week eight. The secondary outcome measures will include FDI from baseline to week five, House-Brackmann Grade, lip mobility, and stiffness scales. Trial registration Current Controlled-Trials ISRCTN43104115; registration date: 06 July 2010; the date of the first patient's randomization: 04 August 2010 PMID:21388554

  17. Comprehensive, Individualized, Person-Centered Management of Community-Residing Persons with Moderate-to-Severe Alzheimer Disease: A Randomized Controlled Trial.

    PubMed

    Reisberg, Barry; Shao, Yongzhao; Golomb, James; Monteiro, Isabel; Torossian, Carol; Boksay, Istvan; Shulman, Melanie; Heller, Sloane; Zhu, Zhaoyin; Atif, Ayesha; Sidhu, Jaskirat; Vedvyas, Alok; Kenowsky, Sunnie

    2017-01-01

    The aim was to examine the added benefits of a Comprehensive, Individualized, Person-Centered Management (CI-PCM) program over memantine treatment alone. This was a 28-week, clinician-blinded, randomized, controlled, parallel-group study, with a similar study population, similar eligibility criteria, and a similar design to the memantine pivotal trial of Reisberg et al. [N Engl J Med 2003;348:1333-1341]. Twenty eligible community-residing Alzheimer disease (AD) subject-caregiver dyads were randomized to the CI-PCM program (n = 10) or to usual community care (n = 10). Primary outcomes were the New York University Clinician's Interview-Based Impression of Change Plus Caregiver Input (NYU-CIBIC-Plus), assessed by one clinician set, and an activities of daily living inventory, assessed by a separate clinician set at baseline and at weeks 4, 12, and 28. Primary outcomes showed significant benefits of the CI-PCM program at all post-baseline evaluations. Improvement on the NYU-CIBIC-Plus in the management group at 28 weeks was 2.9 points over the comparator group. The memantine 2003 trial showed an improvement of 0.3 points on this global measure in memantine-treated versus placebo-randomized subjects at 28 weeks. Hence, globally, the management program intervention benefits were 967% greater than memantine treatment alone. These results are approximately 10 times those usually observed with both nonpharmacological and pharmacological treatments and indicate substantial benefits with the management program for advanced AD persons. © 2017 S. Karger AG, Basel.

  18. Efficacy and safety of teneligliptin, a novel dipeptidyl peptidase-4 inhibitor, in Korean patients with type 2 diabetes mellitus: a 24-week multicentre, randomized, double-blind, placebo-controlled phase III trial.

    PubMed

    Hong, S; Park, C-Y; Han, K A; Chung, C H; Ku, B J; Jang, H C; Ahn, C W; Lee, M-K; Moon, M K; Son, H S; Lee, C B; Cho, Y-W; Park, S-W

    2016-05-01

    We assessed the 24-week efficacy and safety of teneligliptin, a novel dipeptidyl peptidase-4 inhibitor, in Korean patients with type 2 diabetes mellitus (T2DM) that was inadequately controlled with diet and exercise. The present study was designed as a multicentre, randomized, double-blind, placebo-controlled, parallel-group, phase III study. Patients (n = 142) were randomized 2 : 1 into two treatment groups as follows: 99 received teneligliptin (20 mg) and 43 received placebo. The primary endpoint was the change in glycated haemoglobin (HbA1c) level from baseline to week 24. Teneligliptin significantly reduced the HbA1c level from baseline compared with placebo after 24 weeks. At week 24, the differences between changes in HbA1c and fasting plasma glucose (FBG) in the teneligliptin and placebo groups were -0.94% [least-squares (LS) mean -1.22, -0.65] and -1.21 mmol/l (-1.72, -0.70), respectively (all p < 0.001). The incidences of hypoglycaemia and adverse events were not significantly different between the two groups. This phase III, randomized, placebo-controlled study provides evidence of the safety and efficacy of 24 weeks of treatment with teneligliptin as monotherapy in Korean patients with T2DM. © 2016 The Authors. Diabetes, Obesity and Metabolism published by John Wiley & Sons Ltd.

  19. Centrifugal multiplexing fixed-volume dispenser on a plastic lab-on-a-disk for parallel biochemical single-end-point assays

    PubMed Central

    La, Moonwoo; Park, Sang Min; Kim, Dong Sung

    2015-01-01

    In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely, a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performances, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemicals. PMID:25610516

  20. Topical minoxidil: cardiac effects in bald man.

    PubMed Central

    Leenen, F H; Smith, D L; Unger, W P

    1988-01-01

    Systemic cardiovascular effects during chronic treatment with topical minoxidil vs placebo were evaluated using a double-blind, randomized design for two parallel groups (n = 20 for minoxidil, n = 15 for placebo). During 6 months of follow-up, blood pressure did not change, whereas minoxidil increased heart rate by 3-5 beats min-1. Compared with placebo, topical minoxidil caused significant increases in LV end-diastolic volume, in cardiac output (by 0.75 l min-1) and in LV mass (by 5 g m-2). We conclude that in healthy subjects short-term use of topical minoxidil is likely not to be detrimental. However, safety needs to be established regarding ischaemic symptoms in patients with coronary artery disease as well as for the possible development of LV hypertrophy in healthy subjects during years of therapy. PMID:3191000

  1. Research on the adaptive optical control technology based on DSP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun

    2018-02-01

    Adaptive optics is a real-time compensation technique that uses a high-speed supporting system to correct wavefront errors caused by atmospheric turbulence. However, the randomness and rapidity of atmospheric change make the design of adaptive optical systems difficult: the large number of complex real-time operations leads to long delays, which is a serious obstacle. To solve this problem, a hardware-based computation and parallel processing strategy is proposed, and a high-speed adaptive optical control system based on a DSP is developed. A hardware counter is used to benchmark the system. The results show that the system can complete one closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out, with good results.
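    The closed-loop computation being accelerated is conceptually simple: multiply the measured residual wavefront slopes by a reconstructor matrix and integrate the result onto the corrector commands. A minimal numerical sketch, in which the matrix sizes, loop gain, and frozen disturbance are illustrative assumptions rather than the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_act, n_meas = 5, 12                  # actuators, wavefront-sensor measurements

D = rng.normal(size=(n_meas, n_act))   # interaction matrix: commands -> slopes
R = np.linalg.pinv(D)                  # least-squares command reconstructor

disturbance = rng.normal(size=n_meas)  # frozen turbulence slopes for this sketch
commands = np.zeros(n_act)
gain = 0.5                             # integrator loop gain

for _ in range(50):                    # one pass = one real-time control cycle
    residual = disturbance - D @ commands   # what the sensor measures
    commands += gain * (R @ residual)       # pure-integrator command update

open_loop = np.linalg.norm(disturbance)
closed_loop = np.linalg.norm(disturbance - D @ commands)
```

    The matrix-vector products in the loop body are what must finish within the cycle time, which is why the paper moves them into hardware and parallel processing.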

  2. The Great Smoky Mountains Study of Youth. Goals, design, methods, and the prevalence of DSM-III-R disorders.

    PubMed

    Costello, E J; Angold, A; Burns, B J; Stangl, D K; Tweed, D L; Erkanli, A; Worthman, C M

    1996-12-01

    The Great Smoky Mountains Study of youth focuses on the relationship between the development of psychiatric disorder and the need for and use of mental health services. A multistage, overlapping cohorts design was used, in which 4500 of the 11758 children aged 9, 11, and 13 years in an 11-county area of the southeastern United States were randomly selected for screening for psychiatric symptoms. Children who scored in the top 25% on the screening questionnaire, together with a 1 in 10 random sample of the rest, were recruited for 4 waves of intensive, annual interviews (n = 1015 at wave 1). In a parallel study, all American Indian children aged 9, 11, and 13 years were recruited (N = 323 at wave 1). The 3-month prevalence (+/-SE) of any DSM-III-R axis I disorder in the main sample, weighted to reflect population prevalence rates, was 20.3% +/- 1.7%. The most common diagnoses were anxiety disorders (5.7% +/- 1.0%), enuresis (5.1% +/- 1.0%), tic disorders (4.2% +/- 0.9%), conduct disorder (3.3% +/- 0.6%), oppositional defiant disorder (2.7% +/- 0.4%), and hyperactivity (1.9% +/- 0.4%). The prevalence of psychiatric disorder in this rural sample was similar to rates reported in other recent studies. Poverty was the strongest demographic correlate of diagnosis, in both urban and rural children.

  3. Methylphenidate for attention deficit hyperactivity disorder and drug relapse in criminal offenders with substance dependence: a 24-week randomized placebo-controlled trial

    PubMed Central

    Konstenius, Maija; Jayaram-Lindström, Nitya; Guterstam, Joar; Beck, Olof; Philips, Björn; Franck, Johan

    2014-01-01

    Aim To test the efficacy and safety of osmotic release oral system (OROS) methylphenidate (MPH) in doses up to 180 mg/day to treat attention deficit hyperactivity disorder (ADHD) and prevent any drug relapse in individuals with a co-diagnosis of ADHD and amphetamine dependence. Design Randomized placebo-controlled 24-week double-blind trial with parallel groups design. Setting Participants were recruited from medium security prisons in Sweden. The medication started within 2 weeks before release from prison and continued in out-patient care with twice-weekly visits, including once-weekly cognitive behavioural therapy. Participants Fifty-four men with a mean age of 42 years, currently incarcerated, meeting DSM-IV criteria for ADHD and amphetamine dependence. Measurements Change in self-reported ADHD symptoms, relapse to any drug use (amphetamine and other drugs) measured by urine toxicology, retention to treatment, craving and time to relapse. Findings The MPH-treated group reduced their ADHD symptoms during the trial (P = 0.011) and had a significantly higher proportion of drug-negative urines compared with the placebo group (P = 0.047), including more amphetamine-negative urines (P = 0.019) and better retention to treatment (P = 0.032). Conclusions Methylphenidate treatment reduces attention deficit hyperactivity disorder symptoms and the risk for relapse to substance use in criminal offenders with attention deficit hyperactivity disorder and substance dependence. PMID:24118269

  4. Effects of protein supplements consumed with meals, versus between meals, on resistance training-induced body composition changes in adults: a systematic review.

    PubMed

    Hudson, Joshua L; Bergia, Robert E; Campbell, Wayne W

    2018-06-01

    The impact of timing the consumption of protein supplements in relation to meals on resistance training-induced changes in body composition has not been evaluated systematically. The aim of this systematic review was to assess the effect of consuming protein supplements with meals, vs between meals, on resistance training-induced body composition changes in adults. Studies published up to 2017 were identified with the PubMed, Scopus, Cochrane, and CINAHL databases. Two researchers independently screened 2077 abstracts for eligible randomized controlled trials of parallel design that prescribed a protein supplement and measured changes in body composition for a period of 6 weeks or more. In total, 34 randomized controlled trials with 59 intervention groups were included and qualitatively assessed. Of the intervention groups designated as consuming protein supplements with meals (n = 16) vs between meals (n = 43), 56% vs 72% showed an increase in body mass, 94% vs 90% showed an increase in lean mass, 87% vs 59% showed a reduction in fat mass, and 100% vs 84% showed an increase in the ratio of lean mass to fat mass over time, respectively. Concurrently with resistance training, consuming protein supplements with meals, rather than between meals, may more effectively promote weight control and reduce fat mass without influencing improvements in lean mass.

  5. A Web-Based Intervention to Reduce Indoor Tanning Motivations In Adolescents: A Randomized Controlled Trial

    PubMed Central

    Hillhouse, Joel; Turrisi, Rob; Scaglione, Nichole M.; Cleveland, Michael J.; Baker, Katie; Florence, L. Carter

    2016-01-01

    As few as ten youthful indoor tanning sessions can increase the risk of melanoma by 2 to 4 times, with each additional session adding another 2% to the risk. Recent research estimates that indoor tanning can be linked to approximately 450,000 cases of skin cancer annually in the United States, Europe, and Australia. Despite these risks, indoor tanning remains popular with adolescents. This study tested the efficacy of a web-based skin cancer prevention intervention designed to reduce indoor tanning motivations in adolescent females. A nationally representative sample of 443 female teens was enrolled from an online panel into a two-arm, parallel group design, randomized controlled trial. Treatment participants received an appearance-focused intervention grounded in established health behavior change models. Controls viewed a teen alcohol prevention website. Outcome variables included willingness and intentions to indoor tan, willingness to sunless tan, and measures of indoor tanning attitudes and beliefs. The intervention decreased willingness and intentions to indoor tan and increased sunless tanning willingness relative to controls. We also examined indirect mechanisms of change through intervening variables (e.g., indoor tanning attitudes, norms, positive and negative expectancies) using the product of coefficients approach. The web-based intervention demonstrated efficacy in changing adolescent indoor tanning motivations and improving their orientation toward healthier alternatives. Results from the intervening variable analyses give guidance to future adolescent skin cancer prevention interventions. PMID:27549602

  6. A Web-Based Intervention to Reduce Indoor Tanning Motivations in Adolescents: a Randomized Controlled Trial.

    PubMed

    Hillhouse, Joel; Turrisi, Rob; Scaglione, Nichole M; Cleveland, Michael J; Baker, Katie; Florence, L Carter

    2017-02-01

    As few as ten youthful indoor tanning sessions can increase the risk of melanoma by two to four times, with each additional session adding another 2% to the risk. Recent research estimates that indoor tanning can be linked to approximately 450,000 cases of skin cancer annually in the USA, Europe, and Australia. Despite these risks, indoor tanning remains popular with adolescents. This study tested the efficacy of a web-based skin cancer prevention intervention designed to reduce indoor tanning motivations in adolescent females. A nationally representative sample of 443 female teens was enrolled from an online panel into a two-arm, parallel group design, randomized controlled trial. Treatment participants received an appearance-focused intervention grounded in established health behavior change models. Controls viewed a teen alcohol prevention website. Outcome variables included willingness and intentions to indoor tan, willingness to sunless tan, and measures of indoor tanning attitudes and beliefs. The intervention decreased willingness and intentions to indoor tan and increased sunless tanning willingness relative to controls. We also examined indirect mechanisms of change through intervening variables (e.g., indoor tanning attitudes, norms, positive and negative expectancies) using the product of coefficients approach. The web-based intervention demonstrated efficacy in changing adolescent indoor tanning motivations and improving their orientation toward healthier alternatives. Results from the intervening variable analyses give guidance to future adolescent skin cancer prevention interventions.

  7. The Effects of Iron and/or Zinc Supplementation on Maternal Reports of Sleep in Infants from Nepal and Zanzibar

    PubMed Central

    Kordas, Katarzyna; Siegel, Emily H.; Olney, Deanna K.; Katz, Joanne; Tielsch, James M.; Kariger, Patricia K.; Khalfan, Sabra S.; LeClerq, Steven C.; Khatry, Subarna K.; Stoltzfus, Rebecca J.

    2009-01-01

    Background There is some evidence that sleep patterns may be affected by iron deficiency anemia, but the role of iron in sleep has not been tested in a randomized iron supplementation trial. Objective We investigated the effect of iron supplementation on maternal reports of sleep in infants in 2 randomized, placebo-controlled trials from Pemba Island, Zanzibar, and Nepal. Design In both studies, which had parallel designs and were carried out from 2002 to 2003, infants received iron–folic acid with or without zinc daily for 12 months, and assessments of development were made every 3 months for the duration of the study. Eight hundred seventy-seven Pemban (12.5 ± 4.0 months old) and 567 Nepali (10.8 ± 4.0 months) infants participated. Maternal reports of sleep patterns (napping frequency and duration, nighttime sleep duration, frequency of night waking) were collected. Results Mean Hb concentration was 9.2 ± 1.1 for Pemban and 10.1 ± 1.2 g/dL for Nepali infants. Approximately one-third of the children were stunted. Supplemental iron was consistently associated with longer night and total sleep duration. The effects of zinc supplementation also included longer sleep duration. Conclusions Micronutrient supplementation in infants at high risk for iron deficiency and iron deficiency anemia was related to increased night sleep duration and less night waking. PMID:19322104

  8. A noninferiority confirmatory trial of prasugrel versus clopidogrel in Japanese patients with non-cardioembolic stroke: rationale and study design for a randomized controlled trial - PRASTRO-I trial.

    PubMed

    Nagao, Takehiko; Toyoda, Kazunori; Kitagawa, Kazuo; Kitazono, Takanari; Yamagami, Hiroshi; Uchiyama, Shinichiro; Tanahashi, Norio; Matsumoto, Masayasu; Minematsu, Kazuo; Nagata, Izumi; Nishikawa, Masakatsu; Nanto, Shinsuke; Abe, Kenji; Ikeda, Yasuo; Ogawa, Akira

    2018-04-01

    This comparison of PRAsugrel and clopidogrel in Japanese patients with ischemic STROke (PRASTRO)-I trial investigates the noninferiority of prasugrel to clopidogrel sulfate in the prevention of recurrence of primary events (ischemic stroke, myocardial infarction, and death from other vascular causes), and the long-term safety of prasugrel in Japanese patients with non-cardioembolic stroke. This was an active-controlled, randomized, double-blind, double-dummy, parallel-group study conducted between July 2011 and March 2016 at multiple centers around Japan. Patients had to meet eligibility criteria before receiving 3.75 mg prasugrel or 75 mg clopidogrel orally once daily for a period of 96-104 weeks. A total of 3747 patients were included in this trial; 1598 in the 3.75 mg prasugrel group and 1551 in the 75 mg clopidogrel group completed the study. During the study period, 287 (15.2%) patients in the prasugrel group and 311 (16.7%) in the clopidogrel group discontinued treatment. Baseline characteristics, safety, and efficacy results are forthcoming and will be published separately. This article presents the study design and rationale for a trial investigating the noninferiority of prasugrel to clopidogrel sulfate with regards to the inhibitory effect on primary events in patients with non-cardioembolic stroke.

  9. Power reduction by power gating in differential pair type spin-transfer-torque magnetic random access memories for low-power nonvolatile cache memories

    NASA Astrophysics Data System (ADS)

    Ohsawa, Takashi; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo

    2014-01-01

    Array operation currents in spin-transfer-torque magnetic random access memories (STT-MRAMs) that use four differential-pair-type magnetic tunnel junction (MTJ)-based memory cells (4T2MTJ, two 6T2MTJs, and 8T2MTJ) are simulated and compared with that of SRAM. With L3 cache applications in mind, it is assumed that the memories have a 32 Mbyte capacity and are accessed 64 bytes at a time in parallel. All the STT-MRAMs except the 8T2MTJ one are designed with a 32-bit fine-grained power gating scheme applied to eliminate static currents in the memory cells that are not accessed. The 8T2MTJ STT-MRAM, whose cell design concept is not suited to fine-grained power gating, loads and saves 32 Mbyte data in 64 Mbyte units per 1 Mbit sub-array in 2 × 10³ cycles. It is shown that the array operation current of the 4T2MTJ STT-MRAM is 70 mA averaged over 15 ns write cycles at Vdd = 0.9 V. This is the smallest among the STT-MRAMs, about half that of the low-standby-power (LSTP) SRAM, whose array operation current is totally dominated by the cells' subthreshold leakage.

  10. Effect of Dextromethorphan-Quinidine on Agitation in Patients With Alzheimer Disease Dementia: A Randomized Clinical Trial.

    PubMed

    Cummings, Jeffrey L; Lyketsos, Constantine G; Peskind, Elaine R; Porsteinsson, Anton P; Mintzer, Jacobo E; Scharre, Douglas W; De La Gandara, Jose E; Agronin, Marc; Davis, Charles S; Nguyen, Uyen; Shin, Paul; Tariot, Pierre N; Siffert, João

    Agitation is common among patients with Alzheimer disease; safe, effective treatments are lacking. To assess the efficacy, safety, and tolerability of dextromethorphan hydrobromide-quinidine sulfate for Alzheimer disease-related agitation. Phase 2 randomized, multicenter, double-blind, placebo-controlled trial using a sequential parallel comparison design with 2 consecutive 5-week treatment stages conducted August 2012-August 2014. Patients with probable Alzheimer disease, clinically significant agitation (Clinical Global Impressions-Severity agitation score ≥4), and a Mini-Mental State Examination score of 8 to 28 participated at 42 US study sites. Stable dosages of antidepressants, antipsychotics, hypnotics, and antidementia medications were allowed. In stage 1, 220 patients were randomized in a 3:4 ratio to receive dextromethorphan-quinidine (n = 93) or placebo (n = 127). In stage 2, patients receiving dextromethorphan-quinidine continued; those receiving placebo were stratified by response and rerandomized in a 1:1 ratio to dextromethorphan-quinidine (n = 59) or placebo (n = 60). The primary end point was change from baseline on the Neuropsychiatric Inventory (NPI) Agitation/Aggression domain (scale range, 0 [absence of symptoms] to 12 [symptoms occur daily and with marked severity]). A total of 194 patients (88.2%) completed the study. With the sequential parallel comparison design, 152 patients received dextromethorphan-quinidine and 127 received placebo during the study. Analysis combining stages 1 (all patients) and 2 (rerandomized placebo nonresponders) showed significantly reduced NPI Agitation/Aggression scores for dextromethorphan-quinidine vs placebo (ordinary least squares z statistic, -3.95; P < .001). In stage 1, mean NPI Agitation/Aggression scores were reduced from 7.1 to 3.8 with dextromethorphan-quinidine and from 7.0 to 5.3 with placebo. 
Between-group treatment differences were significant in stage 1 (least squares mean, -1.5; 95% CI, -2.3 to -0.7; P < .001). In stage 2, NPI Agitation/Aggression scores were reduced from 5.8 to 3.8 with dextromethorphan-quinidine and from 6.7 to 5.8 with placebo. Between-group treatment differences were also significant in stage 2 (least squares mean, -1.6; 95% CI, -2.9 to -0.3; P = .02). Adverse events included falls (8.6% for dextromethorphan-quinidine vs 3.9% for placebo), diarrhea (5.9% vs 3.1%, respectively), and urinary tract infection (5.3% vs 3.9%, respectively). Serious adverse events occurred in 7.9% with dextromethorphan-quinidine vs 4.7% with placebo. Dextromethorphan-quinidine was not associated with cognitive impairment, sedation, or clinically significant QTc prolongation. In this preliminary 10-week phase 2 randomized clinical trial of patients with probable Alzheimer disease, combination dextromethorphan-quinidine demonstrated clinically relevant efficacy for agitation and was generally well tolerated. clinicaltrials.gov Identifier: NCT01584440.

  11. Load Balancing in Stochastic Networks: Algorithms, Analysis, and Game Theory

    DTIC Science & Technology

    2014-04-16

    The classic randomized load balancing model is the so-called supermarket model, which describes a system in which customers arrive to a service center with n parallel servers according… Report keywords: mean-field limits, supermarket model, thresholds, game, randomized load balancing.
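
    The supermarket model named in the report keywords can be sketched as the classic "join the shortest of d randomly sampled queues" rule (the power of d choices). A minimal simulation sketch, assuming illustrative parameters (10 servers, 1000 arrivals, d = 2) that are not taken from the report:

```python
import random

def supermarket_assign(queue_lengths, d=2, rng=random):
    """Sample d servers uniformly at random and join the shortest queue.
    Returns the index of the chosen server."""
    candidates = rng.sample(range(len(queue_lengths)), d)
    return min(candidates, key=lambda i: queue_lengths[i])

def simulate(n_servers=10, n_customers=1000, d=2, seed=0):
    """Static balls-in-bins version of the supermarket model: customers
    arrive one at a time and never depart, so only imbalance is visible."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    for _ in range(n_customers):
        queues[supermarket_assign(queues, d, rng)] += 1
    return queues
```

    With d = 1 this degenerates to uniform random assignment; the mean-field limits mentioned in the keywords concern the behavior of such systems as the number of servers grows.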

  12. Random Number Generation for High Performance Computing

    DTIC Science & Technology

    2015-01-01

    …number streams, a quality metric for the parallel random number streams. … with each subtask executed by a separate thread or process (henceforth, process). Each process has…

  13. Omega 3/6 Fatty Acids for Reading in Children: A Randomized, Double-Blind, Placebo-Controlled Trial in 9-Year-Old Mainstream Schoolchildren in Sweden

    ERIC Educational Resources Information Center

    Johnson, Mats; Fransson, Gunnar; Östlund, Sven; Areskoug, Björn; Gillberg, Christopher

    2017-01-01

    Background: Previous research has shown positive effects of Omega 3/6 fatty acids in children with inattention and reading difficulties. We aimed to investigate if Omega 3/6 improved reading ability in mainstream schoolchildren. Methods: We performed a 3-month parallel, randomized, double-blind, placebo-controlled trial followed by 3-month active…

  14. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial

    PubMed Central

    Isiordia-Espinoza, Mario-Alberto; Martinez-Rider, Ricardo; Perez-Urizar, Jose

    2016-01-01

    Background: Preemptive analgesia is considered an alternative for treating postsurgical pain after third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. Material and Methods: A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before surgery. We evaluated the time to first rescue analgesic, pain intensity, total analgesic consumption, and adverse effects. Results: Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain than patients receiving intramuscular tramadol. Conclusions: According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery. Key words: Ketorolac, tramadol, third molar surgery, pain, preemptive analgesia. PMID:27475688

  15. Comparing oncology clinical programs by use of innovative designs and expected net present value optimization: Which adaptive approach leads to the best result?

    PubMed

    Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen

    2017-01-01

    Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of a drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs, and a standard two-parallel-arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again use the probability of program success and the eNPV to compare the simulated programs. Across the development strategies considered, the eNPV showed robust improvement with each successive strategy, the highest being for a three-arm response-adaptive randomization design in Phase 2 combined with a group sequential design with 5 analyses in Phase 3.
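
    The eNPV criterion used above can be illustrated with a toy calculation; the success probabilities, costs, and commercial value below are invented placeholders, not figures from the DIA ADSWG programs:

```python
def program_enpv(p2, p3, cost2, cost3, npv_success):
    """Expected net present value of a Phase 2 -> Phase 3 program: pay the
    Phase 2 cost up front; on Phase 2 success (prob. p2) pay the Phase 3
    cost; on Phase 3 success (prob. p3) collect the commercial NPV."""
    return -cost2 + p2 * (-cost3 + p3 * npv_success)

def program_success(p2, p3):
    """Probability that the whole program succeeds."""
    return p2 * p3

# Hypothetical strategies: (p2, p3, Phase 2 cost, Phase 3 cost) in $M.
strategies = {
    "standard two-arm Phase 2": (0.50, 0.60, 30, 150),
    "adaptive Phase 2":         (0.55, 0.65, 40, 140),
}
enpv = {name: program_enpv(*s, npv_success=1000)
        for name, s in strategies.items()}
```

    In this toy setting the adaptive strategy's higher success probabilities more than repay its extra Phase 2 cost, mirroring the qualitative conclusion of the comparison.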

  16. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  17. SNSPD with parallel nanowires (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ejrnaes, Mikkel; Parlato, Loredana; Gaggero, Alessandro; Mattioli, Francesco; Leoni, Roberto; Pepe, Giampiero; Cristiano, Roberto

    2017-05-01

    Superconducting nanowire single-photon detectors (SNSPDs) have been shown to be promising in applications such as quantum communication and computation, quantum optics, imaging, metrology, and sensing. They offer the advantages of a low dark count rate, high efficiency, a broadband response, short timing jitter, a high repetition rate, and no need for gated-mode operation. Several SNSPD designs have been proposed in the literature. Here, we discuss the so-called parallel nanowire configurations. They were introduced with the aim of improving some SNSPD property, such as detection efficiency, speed, signal-to-noise ratio, or photon number resolution. Although apparently similar, the various parallel designs are not the same: no single design improves all of these properties at once. In fact, each design presents its own characteristics with specific advantages and drawbacks. In this work, we discuss the various designs, outlining their peculiarities and possible improvements.

  18. "A Vegetarian vs. Conventional Hypocaloric Diet: The Effect on Physical Fitness in Response to Aerobic Exercise in Patients with Type 2 Diabetes." A Parallel Randomized Study.

    PubMed

    Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana

    2016-10-26

    It has been shown that it is possible to modify macronutrient oxidation, physical fitness, and resting energy expenditure (REE) through changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and a conventional diet (C) with the same caloric restriction (-500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry, and indirect calorimetry was performed at the start and after 12 weeks. Repeated-measures ANOVA (analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used to evaluate the relationships between continuous variables and factors. Maximal oxygen consumption (VO2max) increased by 12% in the vegetarian group (V) (F = 13.1, p < 0.001, partial η² = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η² = 0.209). Maximal performance (Wattmax) increased by 21% in V (F = 8.3, p < 0.001, partial η² = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η² = 0.116). Our results indicate that V leads to improvement in physical fitness after an aerobic exercise program more effectively than C.

  19. A Behavior-Based Intervention That Prevents Sexual Assault: the Results of a Matched-Pairs, Cluster-Randomized Study in Nairobi, Kenya.

    PubMed

    Baiocchi, Michael; Omondi, Benjamin; Langat, Nickson; Boothroyd, Derek B; Sinclair, Jake; Pavia, Lee; Mulinge, Munyae; Githua, Oscar; Golden, Neville H; Sarnquist, Clea

    2017-10-01

    The study's design was a cluster-randomized, matched-pairs, parallel trial of a behavior-based sexual assault prevention intervention in the informal settlements. The participants were primary school girls aged 10-16. Classroom-based interventions for girls and boys were delivered by instructors from the same settlements, at the same time, over six 2-h sessions. The girls' program had components of empowerment, gender relations, and self-defense; the boys' program promoted healthy gender norms. The control arm of the study received a health and hygiene curriculum. The primary outcome was the rate of sexual assault in the prior 12 months at the cluster level (school level). Secondary outcomes included the generalized self-efficacy scale, the distribution of the number of times victims were sexually assaulted in the prior period, skills used, disclosure rates, and the distribution of perpetrators. Difference-in-differences estimates are reported with bootstrapped confidence intervals. Fourteen schools with 3147 girls from the intervention group and 14 schools with 2539 girls from the control group were included in the analysis. We estimate a 3.7% decrease in the risk of sexual assault in the intervention group due to the intervention (from 7.3% at baseline), p = 0.03 and 95% CI = (0.4, 8.0). We estimate an increase in mean generalized self-efficacy score of 0.19 (baseline average 3.1, on a 1-4 scale), p = 0.0004 and 95% CI = (0.08, 0.39). This innovative intervention, which combined parallel training for young adolescent girls and boys in school settings, showed a significant reduction in the rate of sexual assault among girls in this population.

  20. Algorithms for the Construction of Parallel Tests by Zero-One Programming. Project Psychometric Aspects of Item Banking No. 7. Research Report 86-7.

    ERIC Educational Resources Information Center

    Boekkooi-Timminga, Ellen

    Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…

  1. Design and implementation of a novel modal space active force control concept for spatial multi-DOF parallel robotic manipulators actuated by electrical actuators.

    PubMed

    Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K

    2018-01-01

    A robotic spine brace based on a parallel-actuated robotic system is a new device for the treatment and sensing of scoliosis; however, the strong dynamic coupling and anisotropy of parallel manipulators cause a loss of accuracy in rehabilitation force control, including large errors in both the direction and magnitude of the force. A novel active force control strategy, named modal space force control, is proposed to solve these problems. Considering the electrically driven system and the contact environment, a mathematical model of the spatial parallel manipulator is built. The strong dynamic coupling problem in the force field is characterized experimentally, as is the anisotropy of the parallel manipulator's workspace. The effects of dynamic coupling on control design and performance are discussed, and the influence of anisotropy on accuracy is also addressed. From the mass/inertia matrix and stiffness matrix of the parallel manipulator, a modal matrix can be calculated by eigenvalue decomposition. Because the modal matrix is orthogonal with respect to the mass matrix, the strongly coupled dynamic equations expressed in the work space or joint space of the parallel manipulator can be transformed into decoupled equations formulated in modal space. Each force control channel is then independent of the others in modal space; hence the proposed modal space force control concept, in which the force controller is designed in modal space. A modal space active force control is designed and implemented with only a simple PID controller, employed as an example control method to show the differences, uniqueness, and benefits of modal space force control. Simulation and experimental results show that the proposed modal space force control concept effectively overcomes the effects of the strong dynamic coupling and anisotropy in physical space; modal space force control is thus a very useful control framework, better than current joint space and work space control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
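
    The decoupling step described above can be sketched numerically: a generalized eigendecomposition of the stiffness matrix with respect to the mass matrix yields a modal matrix that diagonalizes both. The 2x2 matrices below are arbitrary symmetric positive-definite placeholders, not the manipulator's actual dynamics:

```python
import numpy as np

# Placeholder mass/inertia and stiffness matrices (symmetric positive
# definite); in the paper these come from the parallel manipulator model.
M = np.array([[2.0, 0.3],
              [0.3, 1.5]])
K = np.array([[50.0, 5.0],
              [5.0, 80.0]])

# Solve the generalized eigenproblem K v = lambda M v by whitening with
# the Cholesky factor of M (scipy.linalg.eigh(K, M) would do the same).
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T          # symmetric matrix with the same eigenvalues
w, Q = np.linalg.eigh(A)
Phi = Linv.T @ Q               # modal matrix

# Mass-orthogonality: Phi^T M Phi = I and Phi^T K Phi = diag(w), so the
# coupled equations of motion decouple channel-by-channel in modal space,
# and a controller (e.g. one PID per channel) can be designed independently.
M_modal = Phi.T @ M @ Phi
K_modal = Phi.T @ K @ Phi
```

    The same transform applied to larger matrices covers the full spatial multi-DOF case; only the size of M and K changes.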

  2. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs lie in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on a quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
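
    The best-case boundary above comes from Soares and Wu's big stick design. A minimal sketch of that allocation rule, assuming an illustrative imbalance tolerance b = 3 (the paper evaluates many settings; this value is not taken from it):

```python
import random

def big_stick_sequence(n, b=3, seed=0):
    """Big stick design: allocate 1:1 with a fair coin, but force the
    lagging arm whenever the absolute imbalance reaches the tolerance b."""
    rng = random.Random(seed)
    imbalance = 0              # (# assigned to A) - (# assigned to B)
    seq = []
    for _ in range(n):
        if imbalance >= b:
            arm = "B"          # forced allocation restores balance
        elif imbalance <= -b:
            arm = "A"
        else:
            arm = rng.choice("AB")
        imbalance += 1 if arm == "A" else -1
        seq.append(arm)
    return seq
```

    By construction the absolute imbalance never exceeds b, while every allocation inside the tolerance band remains a fair coin flip, which is why the design scores well on both the imbalance and correct-guess measures.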

  3. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. 
Its application to realistic configurations has not been carried out.

  4. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. 
Its application to realistic configurations has not been carried out.

  5. Effectiveness of a smartphone application for improving healthy lifestyles, a randomized clinical trial (EVIDENT II): study protocol

    PubMed Central

    2014-01-01

    Background: New technologies could facilitate changes in lifestyle and improve public health. However, no large randomized, controlled studies providing scientific evidence of the benefits of their use have been conducted. The aims of this study are to develop and validate a smartphone application, and to evaluate the effect of adding this tool to a standardized intervention designed to improve adherence to the Mediterranean diet and to physical activity. An evaluation is also made of the effect of modifying habits upon vascular structure and function, and therefore on arterial aging. Methods/Design: A randomized, double-blind, multicenter, parallel-group clinical trial will be carried out. A total of 1215 subjects under 70 years of age from the EVIDENT trial will be included. Counseling common to both groups (control and intervention) will be provided on adaptation to the Mediterranean diet and on physical activity. The intervention group will moreover receive training on the use of a smartphone application designed to promote a healthy diet and increased physical activity, and will use the application for three months. The main study endpoints will be the changes in physical activity, assessed by accelerometer and the 7-day Physical Activity Recall (PAR) interview, and adaptation to the Mediterranean diet, as evaluated by an adherence questionnaire and a food frequency questionnaire (FFQ). Evaluation will also be made of vascular structure and function based on central arterial pressure, the radial augmentation index, pulse velocity, the cardio-ankle vascular index, and carotid intima-media thickness. Discussion: Confirmation that the new technologies are useful for promoting healthier lifestyles and that their effects are beneficial in terms of arterial aging will have important clinical implications, and may contribute to generalizing their application in favor of improved population health. Trial registration: ClinicalTrials.gov Identifier: NCT02016014 PMID:24628961

  6. Screen-time Weight-loss Intervention Targeting Children at Home (SWITCH): A randomized controlled trial study protocol

    PubMed Central

    2011-01-01

    Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management programs. 
Trial Registration ACTRN12611000164998 PMID:21718543

  7. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
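
    The abstract does not spell out the heuristic remapping algorithm, but the idea of assigning partitions to processors so that redistribution cost stays small can be sketched with a simple greedy matching (an illustration of the problem, not the paper's algorithm):

```python
def greedy_remap(overlap):
    """Assign each new partition to a distinct processor, taking the
    largest partition/processor overlaps first so that most data stays
    where it already is (low redistribution cost).
    overlap[p][q] = amount of partition p's data already on processor q."""
    n = len(overlap)
    pairs = sorted(((overlap[p][q], p, q)
                    for p in range(n) for q in range(n)), reverse=True)
    taken_p, taken_q, mapping = set(), set(), {}
    for _, p, q in pairs:
        if p not in taken_p and q not in taken_q:
            mapping[p] = q
            taken_p.add(p)
            taken_q.add(q)
    return mapping
```

    A greedy matching like this is fast but not optimal; the exact version is a maximum-weight bipartite assignment, and the paper's reported numbers (within 3% of optimal at about 1% of the computational time) suggest the exact solve is rarely worth the extra cost.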

  8. Memory-based frame synchronizer. [for digital communication systems

    NASA Technical Reports Server (NTRS)

    Stattel, R. J.; Niswander, J. K. (Inventor)

    1981-01-01

    A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
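
    The RAM-based sync word recognizer can be emulated in software: the serial-to-parallel converter becomes a shift register, and the RAM becomes a lookup table with one entry per possible parallel word, programmed to fire only at the sync word's address. The 8-bit width and bit pattern below are illustrative:

```python
def make_recognizer(sync_word, width=8):
    """Emulate the programmable RAM decoder: a table addressed by the
    parallel data word, programmed to 1 only at the sync word."""
    table = [0] * (1 << width)
    table[sync_word] = 1
    return table

def find_sync(bits, sync_word, width=8):
    """Shift serial bits in MSB-first and look up each window in the
    recognizer table; return the start indices of sync word matches."""
    table = make_recognizer(sync_word, width)
    mask = (1 << width) - 1
    window, hits = 0, []
    for i, bit in enumerate(bits):
        window = ((window << 1) | bit) & mask
        if i >= width - 1 and table[window]:
            hits.append(i - width + 1)
    return hits
```

    Reprogramming the table (rather than rewiring logic) is what gives the hardware design its real-time reconfiguration ability.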

  9. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  10. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  11. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  12. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A; Mamidala, Amith R

    2014-02-11

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  13. A Generic Mesh Data Structure with Parallel Applications

    ERIC Educational Resources Information Center

    Cochran, William Kenneth, Jr.

    2009-01-01

    High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…

  14. Sequential color video to parallel color video converter

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.

  15. Pilot study assessing the feasibility of applying bilateral subthalamic nucleus deep brain stimulation in very early stage Parkinson's disease: study design and rationale.

    PubMed

    Charles, David; Tolleson, Christopher; Davis, Thomas L; Gill, Chandler E; Molinari, Anna L; Bliton, Mark J; Tramontana, Michael G; Salomon, Ronald M; Kao, Chris; Wang, Lily; Hedera, Peter; Phibbs, Fenna T; Neimat, Joseph S; Konrad, Peter E

    2012-01-01

    Deep brain stimulation provides significant symptomatic benefit for people with advanced Parkinson's disease whose symptoms are no longer adequately controlled with medication. Preliminary evidence suggests that subthalamic nucleus stimulation may also be efficacious in early Parkinson's disease, and results of animal studies suggest that it may spare dopaminergic neurons in the substantia nigra. We report the methodology and design of a novel Phase I clinical trial testing the safety and tolerability of deep brain stimulation in early Parkinson's disease and discuss previous failed attempts at neuroprotection. We recently conducted a prospective, randomized, parallel-group, single-blind pilot clinical trial of deep brain stimulation in early Parkinson's disease. Subjects were randomized to receive either optimal drug therapy or deep brain stimulation plus optimal drug therapy. Follow-up visits occurred every six months for a period of two years and included week-long therapy washouts. Thirty subjects with Hoehn & Yahr Stage II idiopathic Parkinson's disease were enrolled over a period of 32 months. Twenty-nine subjects completed all follow-up visits; one patient in the optimal drug therapy group withdrew from the study after baseline. Baseline characteristics for all thirty patients were not significantly different. This study demonstrates that it is possible to recruit and retain subjects in a clinical trial testing deep brain stimulation in early Parkinson's disease. The results of this trial will be used to support the design of a Phase III, multicenter trial investigating the efficacy of deep brain stimulation in early Parkinson's disease.

  16. Pilot Study Assessing the Feasibility of Applying Bilateral Subthalamic Nucleus Deep Brain Stimulation in Very Early Stage Parkinson's Disease: Study design and rationale

    PubMed Central

    Charles, David; Tolleson, Christopher; Davis, Thomas L.; Gill, Chandler E.; Molinari, Anna L.; Bliton, Mark J.; Tramontana, Michael G.; Salomon, Ronald M.; Kao, Chris; Wang, Lily; Hedera, Peter; Phibbs, Fenna T.; Neimat, Joseph S.; Konrad, Peter E.

    2014-01-01

    Background Deep brain stimulation provides significant symptomatic benefit for people with advanced Parkinson's disease whose symptoms are no longer adequately controlled with medication. Preliminary evidence suggests that subthalamic nucleus stimulation may also be efficacious in early Parkinson's disease, and results of animal studies suggest that it may spare dopaminergic neurons in the substantia nigra. Objective We report the methodology and design of a novel Phase I clinical trial testing the safety and tolerability of deep brain stimulation in early Parkinson's disease and discuss previous failed attempts at neuroprotection. Methods We recently conducted a prospective, randomized, parallel-group, single-blind pilot clinical trial of deep brain stimulation in early Parkinson's disease. Subjects were randomized to receive either optimal drug therapy or deep brain stimulation plus optimal drug therapy. Follow-up visits occurred every six months for a period of two years and included week-long therapy washouts. Results Thirty subjects with Hoehn & Yahr Stage II idiopathic Parkinson's disease were enrolled over a period of 32 months. Twenty-nine subjects completed all follow-up visits; one patient in the optimal drug therapy group withdrew from the study after baseline. Baseline characteristics for all thirty patients were not significantly different. Conclusions This study demonstrates that it is possible to recruit and retain subjects in a clinical trial testing deep brain stimulation in early Parkinson's disease. The results of this trial will be used to support the design of a Phase III, multicenter trial investigating the efficacy of deep brain stimulation in early Parkinson's disease. PMID:23938229

  17. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  18. Strategies to Reduce Injuries and Develop Confidence in Elders (STRIDE): A Cluster-Randomized Pragmatic Trial of a Multifactorial Fall Injury Prevention Strategy: Design and Methods.

    PubMed

    Bhasin, Shalender; Gill, Thomas M; Reuben, David B; Latham, Nancy K; Gurwitz, Jerry H; Dykes, Patricia; McMahon, Siobhan; Storer, Thomas W; Duncan, Pamela W; Ganz, David A; Basaria, Shehzad; Miller, Michael E; Travison, Thomas G; Greene, Erich J; Dziura, James; Esserman, Denise; Allore, Heather; Carnie, Martha B; Fagan, Maureen; Hanson, Catherine; Baker, Dorothy; Greenspan, Susan L; Alexander, Neil; Ko, Fred; Siu, Albert L; Volpi, Elena; Wu, Albert W; Rich, Jeremy; Waring, Stephen C; Wallace, Robert; Casteel, Carri; Magaziner, Jay; Charpentier, Peter; Lu, Charles; Araujo, Katy; Rajeevan, Haseena; Margolis, Scott; Eder, Richard; McGloin, Joanne M; Skokos, Eleni; Wiggins, Jocelyn; Garber, Lawrence; Clauser, Steven B; Correa-De-Araujo, Rosaly; Peduzzi, Peter

    2017-10-14

Fall injuries are a major cause of morbidity and mortality among older adults. We describe the design of a pragmatic trial to compare the effectiveness of an evidence-based, patient-centered multifactorial fall injury prevention strategy with enhanced usual care. Strategies to Reduce Injuries and Develop Confidence in Elders (STRIDE) is a 40-month cluster-randomized, parallel-group, superiority, pragmatic trial being conducted at 86 primary care practices in 10 healthcare systems across the USA. The 86 practices were randomized to the intervention or control group using covariate-based constrained randomization, stratified by healthcare system. Participants are community-living persons, ≥70 years, at increased risk for serious fall injuries. The intervention is a co-management model in which a nurse Falls Care Manager performs multifactorial risk assessments and develops individualized care plans that include surveillance, follow-up evaluation, and intervention strategies. The control group receives enhanced usual care, with clinicians and patients receiving evidence-based information on falls prevention. The primary outcome is serious fall injuries, operationalized as those leading to medical attention (non-vertebral fractures, joint dislocation, head injury, lacerations, and other major sequelae). Secondary outcomes include all fall injuries, all falls, and well-being (concern for falling; anxiety and depressive symptoms; physical function and disability). The target sample size was 5,322 participants to provide 90% power to detect a 20% reduction in the primary outcome rate relative to control. The trial enrolled 5,451 subjects in 20 months. Intervention and follow-up are ongoing. The findings of the STRIDE study will have important clinical and policy implications for the prevention of fall injuries in older adults. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America.
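Covariate-based constrained randomization, as used to allocate the 86 STRIDE practices, can be sketched as follows. This is a minimal illustration, not the STRIDE algorithm: the practice-level covariate, the tolerance, and all names are invented, and stratification by healthcare system is omitted. Candidate 1:1 allocations are enumerated, only those balanced on the covariate are retained, and the final scheme is drawn at random from the constrained set:

```python
import itertools
import random

def constrained_randomization(covariates, tolerance, seed=0):
    """Enumerate 1:1 cluster allocations; keep only those balanced on a
    cluster-level covariate; draw the final scheme at random from that set."""
    n = len(covariates)
    clusters = range(n)
    acceptable = []
    for arm_a in itertools.combinations(clusters, n // 2):
        arm_b = tuple(c for c in clusters if c not in arm_a)
        mean_a = sum(covariates[c] for c in arm_a) / len(arm_a)
        mean_b = sum(covariates[c] for c in arm_b) / len(arm_b)
        if abs(mean_a - mean_b) <= tolerance:     # balance criterion
            acceptable.append((set(arm_a), set(arm_b)))
    random.seed(seed)
    return random.choice(acceptable), len(acceptable)

# Hypothetical cluster-level covariate, e.g. mean patient age per practice
ages = [74, 76, 75, 80, 73, 78]
(arm_a, arm_b), n_ok = constrained_randomization(ages, tolerance=1.0)
```

In practice the candidate space is sampled rather than fully enumerated (86 practices give an astronomically large number of allocations), but the constrain-then-draw structure is the same.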

  19. Rationale, design, and baseline characteristics of 2 large, simple, randomized trials evaluating telmisartan, ramipril, and their combination in high-risk patients: the Ongoing Telmisartan Alone and in Combination with Ramipril Global Endpoint Trial/Telmisartan Randomized Assessment Study in ACE Intolerant Subjects with Cardiovascular Disease (ONTARGET/TRANSCEND) trials.

    PubMed

    Teo, Koon; Yusuf, Salim; Sleight, Peter; Anderson, Craig; Mookadam, Farouk; Ramos, Barbara; Hilbrich, Lutz; Pogue, Janice; Schumacher, Helmut

    2004-07-01

Angiotensin-converting enzyme (ACE) inhibitors reduce mortality, myocardial infarction, stroke, heart failure, need for revascularization, nephropathy, and diabetes and its complications. Although angiotensin-II receptor blockers (ARBs) have been less extensively evaluated, theoretically they may have "protective" effects similar to those of ACE inhibitors, but with better tolerability. Currently, there is uncertainty about the role of ARBs when used alone or in combination with an ACE inhibitor in high-risk populations with controlled hypertension. Primary objectives of the ONgoing Telmisartan Alone and in Combination with Ramipril Global Endpoint Trial (ONTARGET) are to determine if the combination of the ARB telmisartan and the ACE inhibitor ramipril is more effective than ramipril alone, and if telmisartan is at least as effective as ramipril. The Telmisartan Randomized AssessmeNt Study in aCE iNtolerant subjects with cardiovascular Disease (TRANSCEND) will determine if telmisartan is superior to placebo in patients who are intolerant of ACE inhibitors. The primary outcome for both trials is the composite of cardiovascular death, myocardial infarction, stroke, or hospitalization for heart failure. High-risk patients with coronary, peripheral, or cerebrovascular disease or diabetes with end-organ damage are being recruited and followed for 3.5 to 5.5 years in 2 parallel, randomized, double-blind clinical trials. Recruitment from 730 centers in 40 countries for ONTARGET (n = 25,620) was completed in July 2003. For TRANSCEND, 5,776 patients (out of a projected total of 6,000) had been recruited by May 10, 2004. Baseline patient characteristics are comparable to those of the Heart Outcomes Prevention Evaluation (HOPE) trial, on which the design of the current study is based, confirming that the patients are at high risk.

  20. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

Real-time processing based on embedded systems will enhance the stereo imaging capability of LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computing load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed and evaluated on test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
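A two-stage pipeline parallel model based on message transmission, of the kind the paper establishes, can be sketched generically. The sketch below is illustrative only: it uses Python threads and a bounded queue rather than the paper's 8-core DSP and its message-passing primitives, and the two stage functions are placeholders for the actual stereo-imaging steps:

```python
import queue
import threading

def run_pipeline(frames):
    """Two-stage pipeline: stage 1 produces messages while stage 2 consumes
    them, so the stages overlap in time instead of running back to back."""
    q = queue.Queue(maxsize=4)   # bounded buffer models limited on-chip memory
    results = []

    def stage1():                # placeholder for, e.g., geometric correction
        for f in frames:
            q.put(f * 2)         # send a message downstream
        q.put(None)              # end-of-stream sentinel

    def stage2():                # placeholder for, e.g., stereo intersection
        while True:
            item = q.get()
            if item is None:
                break
            results.append(item + 1)

    t1 = threading.Thread(target=stage1)
    t2 = threading.Thread(target=stage2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

out = run_pipeline([1, 2, 3])
```

With balanced stage costs, a two-stage pipeline approaches a 2x speed-up; the paper's reported 6.4x on eight cores additionally parallelizes within stages.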

  1. Displacement and deformation measurement for large structures by camera network

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

A displacement and deformation measurement method for large structures based on a series-parallel connection camera network is presented. Taking the dynamic monitoring of a large-scale crane during lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method using this network is studied. The movement range of the crane body is small, while that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body, and the deformation of the arm are measured. Compared with a purely series or purely parallel camera network, the designed series-parallel connection camera network can measure not only the movement and displacement of a large structure but also the relative movement and deformation of parts of interest, using a relatively simple optical measurement system.

  2. The effect of cell design and test criteria on the series/parallel performance of nickel cadmium cells and batteries

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Webb, D. A.

    1983-01-01

Three batteries were operated in parallel from a common bus during charge and discharge. SMM used NASA Standard 20 Ah cells and batteries; LANDSAT-D used NASA 50 Ah cells and batteries of a similar design. Each battery consisted of 22 series-connected cells providing the nominal 28 V bus. The three batteries were charged in parallel using the voltage-limit/current-taper mode, wherein the voltage limit was temperature compensated. Discharge occurred on demand of the spacecraft instruments and electronics. Both flights were planned for three- to five-year missions. The series/parallel configuration of cells and batteries for the 3-5 yr mission required a well-controlled product with built-in reliability and uniformity. Examples are given of how component, cell, and battery selection methods affect the uniformity of the series/parallel operation of the batteries, both in testing and in flight.

  3. Effectiveness of a smartphone application for improving healthy lifestyles, a randomized clinical trial (EVIDENT II): study protocol.

    PubMed

    Recio-Rodríguez, José I; Martín-Cantera, Carlos; González-Viejo, Natividad; Gómez-Arranz, Amparo; Arietaleanizbeascoa, Maria S; Schmolling-Guinovart, Yolanda; Maderuelo-Fernandez, Jose A; Pérez-Arechaederra, Diana; Rodriguez-Sanchez, Emiliano; Gómez-Marcos, Manuel A; García-Ortiz, Luis

    2014-03-15

New technologies could facilitate changes in lifestyle and improve public health. However, no large randomized controlled studies providing scientific evidence of the benefits of their use have been conducted. The aims of this study are to develop and validate a smartphone application, and to evaluate the effect of adding this tool to a standardized intervention designed to improve adherence to the Mediterranean diet and to physical activity. An evaluation is also made of the effect of modifying habits upon vascular structure and function, and therefore on arterial aging. A randomized, double-blind, multicenter, parallel-group clinical trial will be carried out. A total of 1,215 subjects under 70 years of age from the EVIDENT trial will be included. Counseling common to both groups (control and intervention) will be provided on adaptation to the Mediterranean diet and on physical activity. The intervention group will additionally receive training in the use of a smartphone application designed to promote a healthy diet and increased physical activity, and will use the application for three months. The main study endpoints will be the changes in physical activity, assessed by accelerometer and the 7-day Physical Activity Recall (PAR) interview, and adaptation to the Mediterranean diet, as evaluated by an adherence questionnaire and a food frequency questionnaire (FFQ). Vascular structure and function will also be evaluated, based on central arterial pressure, the radial augmentation index, pulse velocity, the cardio-ankle vascular index, and carotid intima-media thickness. Confirmation that the new technologies are useful for promoting healthier lifestyles, and that their effects are beneficial in terms of arterial aging, will have important clinical implications and may contribute to generalizing their application in favor of improved population health. ClinicalTrials.gov Identifier: NCT02016014.

  4. Effect of oil gum massage therapy on common pathogenic oral microorganisms - A randomized controlled trial

    PubMed Central

    Singla, Nishu; Acharya, Shashidhar; Martena, Suganthi; Singla, Ritesh

    2014-01-01

    Objectives: (i) To assess reduction in Streptococcus mutans and Lactobacillus species count in saliva sample after ten minutes of oil gum massage therapy (massage of gingival tissues) per day for three weeks with sesame oil, olive oil, and coconut oil in three different groups of subjects. (ii) To compare the efficacy between three different oils and the “gold standard” chlorhexidine gel. (iii) To assess reduction in gingival scores and plaque scores of study subjects. Materials and Methods: Study design – Single center, parallel design, and triple blind randomized clinical study with four treatment groups. Participants: 32 of the 40 study subjects working as housekeeping personnel at Kasturba Hospital, Manipal; aged 18-55 years completed the three-week study period. Interventions: Subjects were randomly assigned to massage their gingiva everyday for three weeks with sesame oil, olive oil, coconut oil (tests), and Chlorhexidine gel (control). Oral health status and paraffin stimulated saliva samples were obtained at baseline and after three weeks of oil gum massage therapy. Outcome measures: Microbial culture, plaque index, and gingival index. Statistical analysis: Paired t test and Kruskal Wallis test. Results: There was a significant reduction in mean Streptococcus mutans count, Lactobacillus count, plaque scores, and gingival scores in all four groups after the study. However, there was no significant difference found in percentage reduction of these variables between the four groups. Conclusion: These oils can be used as valuable preventive agents in maintaining and improving oral health in low socioeconomic status population. However, it is recommended that further research should be conducted in other populations with a larger sample and longer duration of follow-up period. PMID:25210256

  5. Translating basic behavioral and social science research to clinical application: the EVOLVE mixed methods approach.

    PubMed

    Peterson, Janey C; Czajkowski, Susan; Charlson, Mary E; Link, Alissa R; Wells, Martin T; Isen, Alice M; Mancuso, Carol A; Allegrante, John P; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B

    2013-04-01

To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in 3 high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to 3 clinical chronic disease populations. We employed a sequential mixed-methods model (EVOLVE) to design and test the PA/SA intervention in order to increase physical activity in people with coronary artery disease (post-percutaneous coronary intervention [PCI]) or asthma (ASM), and to improve medication adherence in African Americans with hypertension (HTN). In an initial qualitative phase, we explored participant values and beliefs. We next pilot tested and refined the intervention, and then conducted 3 randomized controlled trials with a parallel study design. Participants were randomized to combined PA/SA versus an informational control and were followed bimonthly for 12 months, with assessment of health behaviors and interval medical events. Over 4.5 years, we enrolled 1,056 participants. Changes were made sequentially to the intervention during the qualitative and pilot phases. The 3 randomized controlled trials enrolled 242 participants who had undergone PCI, 258 with ASM, and 256 with HTN (n = 756). Overall, 45.1% of PA/SA participants versus 33.6% of informational control participants achieved successful behavior change (p = .001). In multivariate analysis, the PA/SA intervention remained a significant predictor of achieving behavior change (p < .002, odds ratio = 1.66, 95% CI [1.22, 2.27]), controlling for baseline negative affect, comorbidity, gender, race/ethnicity, medical events, smoking, and age. The EVOLVE method is a means by which basic behavioral science research can be translated into efficacious interventions for chronic disease populations.

  6. Adult Congenital Heart Disease-Coping And REsilience (ACHD-CARE): Rationale and methodology of a pilot randomized controlled trial.

    PubMed

    Kovacs, Adrienne H; Bandyopadhyay, Mimi; Grace, Sherry L; Kentner, Amanda C; Nolan, Robert P; Silversides, Candice K; Irvine, M Jane

    2015-11-01

    One-third of North American adults with congenital heart disease (CHD) have diagnosable mood or anxiety disorders and most do not receive mental health treatment. There are no published interventions targeting the psychosocial needs of patients with CHD of any age. We describe the development of a group psychosocial intervention aimed at improving the psychosocial functioning, quality of life, and resilience of adults with CHD and the design of a study protocol to determine the feasibility of a potential full-scale randomized controlled trial (RCT). Drawing upon our quantitative and qualitative research, we developed the Adult CHD-Coping And REsilience (ACHD-CARE) intervention and designed a feasibility study that included a 2-parallel arm non-blinded pilot RCT. Eligible participants (CHD, age ≥ 18 years, no planned surgery, symptoms suggestive of a mood and/or anxiety disorder) were randomized to the ACHD-CARE intervention or Usual Care (1:1 allocation ratio). The group intervention was delivered during eight 90-minute weekly sessions. Feasibility will be assessed in the following domains: (i) process (e.g. recruitment and retention), (ii) resources, (iii) management, (iv) scientific outcomes, and (v) intervention acceptability. This study underscores the importance of carefully developing and testing the feasibility of psychosocial interventions in medical populations before moving to full-scale clinical trials. At study conclusion, we will be poised to make one of three determinations for a full-scale RCT: (1) feasible, (2) feasible with modifications, or (3) not feasible. This study will guide the future evaluation and provision of psychosocial treatment for adults with CHD. Copyright © 2015. Published by Elsevier Inc.

  7. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    NASA Astrophysics Data System (ADS)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

The single-scattering properties of ice clouds can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that it can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results for random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed using the parallelized IITM and compared with the counterparts from the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross section, and the scattering cross section, are given over a complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation now reaches the geometric optics regime, the IITM and the PGOM can be efficiently employed together to accurately compute the single-scattering properties of ice clouds over a wide spectral range.

  8. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the least-cost branch-and-bound search method applied to deadline job scheduling. An object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speed-up over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
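A serial sketch of least-cost branch-and-bound for deadline job scheduling may help fix ideas. The encoding below (each job has a penalty, a deadline, and a processing time; we minimize the total penalty of rejected jobs) is a standard textbook formulation and not necessarily the one used in the paper; the parallel version distributes the frontier of live nodes across processors:

```python
import heapq

def lc_branch_and_bound(jobs):
    """jobs: list of (penalty, deadline, processing_time).
    Returns the minimum total penalty of rejected jobs such that the
    accepted jobs, run in deadline order, all meet their deadlines."""
    jobs = sorted(jobs, key=lambda j: j[1])      # consider jobs by deadline
    best = sum(p for p, _, _ in jobs)            # reject everything: feasible
    # node = (lower_bound, next_job_index, elapsed_time, penalty_so_far);
    # the penalty already paid is an admissible lower bound on the final cost
    heap = [(0, 0, 0, 0)]
    while heap:
        lb, i, t, cost = heapq.heappop(heap)
        if lb >= best:
            break                                # least-cost node can't improve
        if i == len(jobs):
            best = min(best, cost)
            continue
        p, d, dur = jobs[i]
        if t + dur <= d:                         # branch 1: accept job i
            heapq.heappush(heap, (cost, i + 1, t + dur, cost))
        # branch 2: reject job i and pay its penalty
        heapq.heappush(heap, (cost + p, i + 1, t, cost + p))
    return best

# (penalty, deadline, processing_time) for four hypothetical jobs
best_penalty = lc_branch_and_bound([(5, 1, 1), (10, 3, 2), (6, 2, 1), (3, 1, 1)])
```

Expanding nodes in lower-bound order is what makes the search "least-cost": the first complete schedule popped at or above the incumbent bound proves optimality.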

  9. Public and private health-care financing with alternate public rationing rules.

    PubMed

    Cuff, Katherine; Hurley, Jeremiah; Mestelman, Stuart; Muller, Andrew; Nuscheler, Robert

    2012-02-01

We develop a model to analyze parallel public and private health-care financing under two alternative public sector rationing rules: needs-based rationing and random rationing. Individuals vary in income and severity of illness. There is a limited supply of health-care resources used to treat individuals, causing some individuals to go untreated. Insurers (both public and private) must bid to obtain the necessary health-care resources to treat their beneficiaries. Given that individuals' willingness to pay for private insurance is increasing in income, the introduction of private insurance diverts treatment from relatively poor to relatively rich individuals. Further, the impact of introducing parallel private insurance depends on the rationing mechanism in the public sector. We show that the private health insurance market is smaller when the public sector rations according to need than when allocation is random. Copyright © 2010 John Wiley & Sons, Ltd.

  10. Parallel coding of conjunctions in visual search.

    PubMed

    Found, A

    1998-10-01

    Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.

  11. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
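The sampling idea, reduced to one dimension with finite differences (the paper uses a 3D mixed finite element discretization), can be sketched as follows: solving a reaction-diffusion equation with a discretized white-noise right-hand side yields a realization of a spatially correlated Gaussian field, with the reaction coefficient controlling the correlation length. Parameter values and names here are illustrative:

```python
import numpy as np

def sample_field(n=200, corr_len=0.1, seed=1):
    """Draw one realization of a correlated Gaussian field on (0, 1) by
    solving (kappa^2 I - Laplacian) u = W with homogeneous Dirichlet BCs,
    where W is discretized white noise and kappa ~ 1 / correlation length."""
    rng = np.random.default_rng(seed)
    h = 1.0 / (n + 1)
    kappa2 = 1.0 / corr_len**2
    # Tridiagonal finite-difference operator kappa^2 I - d^2/dx^2
    A = (np.diag(np.full(n, kappa2 + 2.0 / h**2))
         + np.diag(np.full(n - 1, -1.0 / h**2), 1)
         + np.diag(np.full(n - 1, -1.0 / h**2), -1))
    w = rng.standard_normal(n) / np.sqrt(h)   # discretized white noise
    return np.linalg.solve(A, w)

u = sample_field()
```

In the paper's setting the same solve is done with scalable multilevel solvers on a structured embedding mesh, and each solve is one sample inside the MLMC loop.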

  12. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.

  13. Parallel Processing and Scientific Applications

    DTIC Science & Technology

    1992-11-30

    Lattice QCD Calculations on the Connection Machine), SIAM News 24, 1 (May 1991) 5. C. F. Baillie and D. A. Johnston, Crumpling Dynamically Triangulated...hypercubic lattice ; in the second, the surface is randomly triangulated once at the beginning of the simulation; and in the third the random...Sharpe, QCD with Dynamical Wilson Fermions 1I, Phys. Rev. D44, 3272 (1991), 8. R. Gupta and C. F. Baillie, Critical Behavior of the 2D XY Model, Phys

  14. Affect school and script analysis versus basic body awareness therapy in the treatment of psychological symptoms in patients with diabetes and high HbA1c concentrations: two study protocols for two randomized controlled trials.

    PubMed

    Melin, Eva O; Svensson, Ralph; Gustavsson, Sven-Åke; Winberg, Agneta; Denward-Olah, Ewa; Landin-Olsson, Mona; Thulesius, Hans O

    2016-04-27

    Depression is linked with alexithymia, anxiety, high HbA1c concentrations, disturbances of cortisol secretion, increased prevalence of diabetes complications and all-cause mortality. The psycho-educational method 'affect school with script analysis' and the mind-body therapy 'basic body awareness treatment' will be trialled in patients with diabetes, high HbA1c concentrations and psychological symptoms. The primary outcome measure is change in symptoms of depression. Secondary outcome measures are changes in HbA1c concentrations, midnight salivary cortisol concentration, symptoms of alexithymia, anxiety, self-image measures, use of antidepressants, incidence of diabetes complications and mortality. Two studies will be performed. Study I is an open-labeled parallel-group study with a two-arm randomized controlled trial design. Patients are randomized to either affect school with script analysis or to basic body awareness treatment. According to power calculations, 64 persons are required in each intervention arm at the last follow-up session. Patients with type 1 or type 2 diabetes were recruited from one hospital diabetes outpatient clinic in 2009. The trial will be completed in 2016. Study II is a multicentre open-labeled parallel-group three-arm randomized controlled trial. Patients will be randomized to affect school with script analysis, to basic body awareness treatment, or to treatment as usual. Power calculations show that 70 persons are required in each arm at the last follow-up session. Patients with type 2 diabetes will be recruited from primary care. This study will start in 2016 and finish in 2023. For both studies, the inclusion criteria are: HbA1c concentration ≥62.5 mmol/mol; depression, alexithymia, anxiety or a negative self-image; age 18-59 years; and diabetes duration ≥1 year. The exclusion criteria are pregnancy, severe comorbidities, cognitive deficiencies or inadequate Swedish. 
Depression, anxiety, alexithymia and self-image are assessed using self-report instruments. HbA1c concentration, midnight salivary cortisol concentration, blood pressure, serum lipid concentrations and anthropometrics are measured. Data are collected from computerized medical records and the Swedish national diabetes and causes of death registers. The trials will test whether "affect school with script analysis" reduces psychological symptoms, increases emotional awareness and improves diabetes-related factors, compared with "basic body awareness treatment" and treatment as usual. ClinicalTrials.gov: NCT01714986.

  15. A two-site, two-arm, 34-week, double-blind, parallel-group, randomized controlled trial of reduced nicotine cigarettes in smokers with mood and/or anxiety disorders: trial design and protocol.

    PubMed

    Allen, Sophia I; Foulds, Jonathan; Pachas, Gladys N; Veldheer, Susan; Cather, Corinne; Azzouz, Nour; Hrabovsky, Shari; Hameed, Ahmad; Yingst, Jessica; Hammett, Erin; Modesto, Jennifer; Krebs, Nicolle M; Zhu, Junjia; Liao, Jason; Muscat, Joshua E; Richie, John; Evins, A Eden

    2017-01-19

    The U.S. Food and Drug Administration can set standards for cigarettes that could include reducing their nicotine content. Such a standard should improve public health without causing unintended serious consequences for sub-populations. This study evaluates the effect of progressive nicotine reduction in cigarettes on smoking behavior, toxicant exposure, and psychiatric symptoms in smokers with comorbid mood and/or anxiety disorders using a two-site, two-arm, double-blind, parallel group, randomized controlled trial (RCT) in four phases over 34 weeks. Adult smokers (N = 200) of 5 or more cigarettes per day will be randomized across two sites (Penn State and Massachusetts General). Participants must not have had a quit attempt in the prior month, nor be planning to quit in the next 6 months, must meet criteria for a current or lifetime unipolar mood and/or anxiety disorder based on the structured Mini-International Neuropsychiatric Interview, and must not have an unstable medical or psychiatric condition. After a week of smoking their own cigarettes, participants receive two weeks of Spectrum research cigarettes with usual nicotine content (11.6 mg). After this baseline period, participants will be randomly assigned to continue smoking Spectrum research cigarettes that contain either (a) Usual Nicotine Content (11.6 mg); or (b) Reduced Nicotine Content: the nicotine content per cigarette is progressively reduced from approximately 11.6 mg to 0.2 mg in five steps over 18 weeks. At the end of the randomization phase, participants will be offered the choice to either (a) quit smoking with assistance, (b) continue smoking free research cigarettes, or (c) return to purchasing their own cigarettes, for the final 12 weeks of the study. 
The primary outcome measure is blood cotinine; key secondary outcomes are: exhaled carbon monoxide, urinary total NNAL (4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol) and 1-hydroxypyrene, oxidative stress biomarkers including 8-isoprostanes, measures of psychiatric symptoms (e.g., depression, anxiety), smoking behavior and dependence (e.g., cigarette consumption, quit attempts), and health effects (e.g., blood pressure, respiratory symptoms). Results from this study will inform FDA on the potential effects of regulating the nicotine content of cigarettes and help determine whether smokers with mood and/or anxiety disorders can safely transition to significantly reduced nicotine content cigarettes. TRN: NCT01928758, registered August 21, 2013.

  16. Protocol of the impact of alternative social assistance disbursement on drug-related harm (TASA) study: a randomized controlled trial to evaluate changes to payment timing and frequency among people who use illicit drugs.

    PubMed

    Richardson, Lindsey; Laing, Allison; Milloy, M-J; Maynard, Russ; Nosyk, Bohdan; Marshall, Brandon; Grafstein, Eric; Daly, Patricia; Wood, Evan; Montaner, Julio; Kerr, Thomas

    2016-07-29

    Government social assistance payments seek to alleviate poverty and address survival needs, but their monthly disbursement may cue increases in illicit drug use. This cue may be magnified when assistance is disbursed simultaneously across the population. Synchronized payments have been linked to escalations in drug use and unintended but severe drug-related harms, including overdose, as well as spikes in demand for health, social, financial and police services. The TASA study examines whether changing payment timing and frequency can mitigate drug-related harm associated with synchronized social assistance disbursement. The study is a parallel-arm, multi-group randomized controlled trial in which 273 participants are randomly allocated for six assistance cycles to a control or one of two intervention arms on a 1:1:1 basis. Intervention arm participants receive their payments: (1) monthly; or (2) semi-monthly, in each case on days that are not during the week when cheques are normally issued. The study partners with a community-based credit union that has developed a system to vary social assistance payment timing. The primary outcome is a 40% increase in drug use during the 3 days beginning with cheque issue day compared with other days of the month. Bi-weekly follow-up interviews collect participant information on this and secondary outcomes of interest, including drug-related harm (e.g. non-fatal overdose), exposure to violence and health service utilization. Self-reported data will be supplemented with participant information from health, financial, police and government administrative databases. A longitudinal, nested, qualitative parallel process evaluation explores participant experiences, and a cost-effectiveness evaluation of different disbursement scenarios will be undertaken. 
Outcomes will be compared between control and intervention arms to identify the impacts of alternative disbursement schedules on drug-related harm resulting from synchronized income assistance. This structural RCT benefits from strong community partnerships, highly detailed outcome measurement, robust methods of randomization and data triangulation with third party administrative databases. The study will provide evidence regarding the potential importance of social assistance program design as a lever to support population health outcomes and service provision for populations with a high prevalence of substance use. NCT02457949 Registered 13 May 2015.

  17. Effective components of feedback from Routine Outcome Monitoring (ROM) in youth mental health care: study protocol of a three-arm parallel-group randomized controlled trial

    PubMed Central

    2014-01-01

    Background Routine Outcome Monitoring refers to regular measurements of clients’ progress in clinical practice, aiming to evaluate and, if necessary, adapt treatment. Clients fill out questionnaires and clinicians receive feedback about the results. Studies concerning feedback in youth mental health care are rare. The effects of feedback, the importance of specific aspects of feedback, and the mechanisms underlying the effects of feedback are unknown. In the present study, several potentially effective components of feedback from Routine Outcome Monitoring in youth mental health care in the Netherlands are investigated. Methods/Design We will examine three different forms of feedback through a three-arm parallel-group randomized controlled trial. 432 children and adolescents (aged 4 to 17 years) and their parents, who have been referred to mental health care institution Pro Persona, will be randomly assigned to one of three feedback conditions (144 participants per condition). Randomization will be stratified by age of the child or adolescent and by department. All participants fill out questionnaires at the start of treatment, one and a half months after the start of treatment, every three months during treatment, and at the end of treatment. Participants in the second and third feedback conditions fill out an additional questionnaire. In condition 1, clinicians receive basic feedback regarding clients’ symptoms and quality of life. In condition 2, the feedback of condition 1 is extended with feedback regarding possible obstacles to a good outcome and with practical suggestions. In condition 3, the feedback of condition 2 is discussed with a colleague while following a standardized format for case consultation. The primary outcome measure is symptom severity and secondary outcome measures are quality of life, satisfaction with treatment, number of sessions, length of treatment, and rates of dropout. 
We will also examine the role of being not on track (not responding to treatment). Discussion This study contributes to the identification of effective components of feedback and a better understanding of how feedback functions in real-world clinical practice. If the different feedback components prove to be effective, this can help to support and improve the care for youth. Trial registration Dutch Trial Register NTR4234 PMID:24393491
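    Stratified randomization of the kind described above (allocation to three feedback conditions, stratified by age and department) is commonly implemented with permuted blocks, one randomization list per stratum. A minimal sketch follows; the arm labels, strata, and block size of 6 are assumptions for illustration, not details from the protocol.

```python
import random

ARMS = ["basic", "extended", "extended+consult"]   # hypothetical labels

def permuted_block_randomizer(arms, block_size, seed):
    """Yield assignments in randomly permuted blocks, so each stratum's
    allocation is balanced after every completed block."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    while True:
        block = arms * (block_size // len(arms))
        rng.shuffle(block)
        yield from block

strata = {}

def assign(age_band, department):
    """Each (age band, department) stratum gets its own independent list."""
    key = (age_band, department)
    if key not in strata:
        strata[key] = permuted_block_randomizer(ARMS, block_size=6,
                                                seed=len(strata))
    return next(strata[key])

first = assign("4-11 years", "outpatient dept")
```

    Balancing within strata keeps the three conditions comparable inside each age band and department, while the random permutation inside each block keeps individual assignments unpredictable.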

  18. Scalable descriptive and correlative statistics with Titan.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Pebay, Philippe Pierre

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up on up to 200 processors.
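    The report's engines are C++, but the core idea behind scalable descriptive statistics is language-independent: each processor computes partial moments in a single pass, and a reduction step merges them with the standard pairwise-update formulas. A small Python sketch of that merge step, illustrating the general technique rather than Titan's actual API:

```python
def partial_stats(xs):
    """One-pass (Welford) count / mean / M2 for a single partition;
    M2 is the sum of squared deviations from the running mean."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return n, mean, m2

def combine(a, b):
    """Merge two partial results: the reduction step a parallel
    engine applies pairwise across processors."""
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    if n == 0:
        return 0, 0.0, 0.0
    delta = mb - ma
    mean = ma + delta * nb / n
    m2 = m2a + m2b + delta * delta * na * nb / n
    return n, mean, m2

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
n, mean, m2 = combine(partial_stats(data[:3]), partial_stats(data[3:]))
variance = m2 / (n - 1)                  # sample variance
```

    Because `combine` is associative, the partial results can be reduced in any tree order, which is what makes the approach scale to many processors without a numerically unstable global two-pass computation.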

  19. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  20. A design concept of parallel elasticity extracted from biological muscles for engineered actuators.

    PubMed

    Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie

    2016-08-23

    Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems to be universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for parallel elasticity in engineered actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.

  1. Mechanisms for an effect of acetylcysteine on renal function after exposure to radio-graphic contrast material: study protocol

    PubMed Central

    2012-01-01

    Background Contrast-induced nephropathy is a common complication of contrast administration in patients with chronic kidney disease and diabetes. Its pathophysiology is not well understood; similarly the role of intravenous or oral acetylcysteine is unclear. Randomized controlled trials to date have been conducted without detailed knowledge of the effect of acetylcysteine on renal function. We are conducting a detailed mechanistic study of acetylcysteine on normal and impaired kidneys, both with and without contrast. This information would guide the choice of dose, route, and appropriate outcome measure for future clinical trials in patients with chronic kidney disease. Methods/Design We designed a 4-part study. We have set up randomised controlled cross-over studies to assess the effect of intravenous (50 mg/kg/hr for 2 hrs before contrast exposure, then 20 mg/kg/hr for 5 hrs) or oral acetylcysteine (1200 mg twice daily for 2 days, starting the day before contrast exposure) on renal function in normal and diseased kidneys, and normal kidneys exposed to contrast. We have also set up a parallel-group randomized controlled trial to assess the effect of intravenous or oral acetylcysteine on patients with chronic kidney disease stage III undergoing elective coronary angiography. The primary outcome is change in renal blood flow; secondary outcomes include change in glomerular filtration rate, tubular function, urinary proteins, and oxidative balance. Discussion Contrast-induced nephropathy represents a significant source of hospital morbidity and mortality. Over the last ten years, acetylcysteine has been administered prior to contrast to reduce the risk of contrast-induced nephropathy. Randomized controlled trials, however, have not reliably demonstrated renoprotection; a recent large randomized controlled trial assessing a dose of oral acetylcysteine selected without mechanistic insight did not reduce the incidence of contrast-induced nephropathy. 
Our study should reveal the mechanism of effect of acetylcysteine on renal function and identify an appropriate route for future dose-response studies and, in time, randomized controlled trials. Trial registration ClinicalTrials.gov: NCT00558142; EudraCT: 2006-003509-18. PMID:22305183

  2. PREDOMOS study, impact of a social intervention program for socially isolated elderly cancer patients: study protocol for a randomized controlled trial.

    PubMed

    Crétel-Durand, Elodie; Nouguerède, Emilie; Le Caer, Hervé; Rousseau, Frédérique; Retornaz, Frédérique; Guillem, Olivier; Couderc, Anne-Laure; Greillier, Laurent; Norguet, Emmanuelle; Cécile, Maud; Boulahssass, Rabia; Le Caer, Francoise; Tournier, Sandrine; Butaud, Chantal; Guillet, Pierre; Nahon, Sophie; Poudens, Laure; Kirscher, Sylvie; Loubière, Sandrine; Diaz, Nadine; Dhorne, Jean; Auquier, Pascal; Baumstarck, Karine

    2017-04-12

    Cancer incidence and social isolation increase along with advanced age, and social isolation potentiates the relative risk of death by cancer. Once spotted, social isolation can be averted with the intervention of a multidisciplinary team. Techniques of automation and remote assistance have already demonstrated their positive impact on falls prevention and quality of life (QoL), though little is known about their impact on socially isolated elderly patients supported for cancer. The primary objective of the PREDOMOS study is to evaluate the impact of establishing a Program of Social intervention associated with techniques of Domotic and Remote assistance (PS-DR) on the improvement of QoL of elderly isolated patients, treated for locally advanced or metastatic cancer. The secondary objectives include treatment failure, tolerance, survival, and autonomy. This trial is a multicenter, prospective, randomized, placebo-controlled, open-label, two-parallel group study. The setting is 10 French oncogeriatric centers. Inclusion criteria are patients aged at least 70 years with a social isolation risk and a histological diagnosis of cancer, locally advanced or metastatic disease. The groups are (1) the control group, receiving usual care; (2) the experimental group, receiving usual care combined with monthly social assistance, domotic, and remote assistance. Participants are randomized in a 1:1 allocation ratio. Evaluation times involve inclusion (randomization) and follow-up (12 months). The primary endpoint is QoL at 3 months (via European Organization for Research and Treatment of Cancer (EORTC) QLQ C30); secondary endpoints are social isolation, time to treatment failure, toxicity, dose-intensity, survival, autonomy, and QoL at 6 months. For the sample size, 320 individuals are required to obtain 90% power to detect a 10-point difference (standard deviation 25) in QoL score between the two groups (20% loss to follow-up patients expected). 
The randomized controlled design is the most appropriate design to demonstrate the efficacy of a new experimental strategy (Evidence-Based Medicine Working Group classification). National and international recommendations could be updated based on the findings of this study. ClinicalTrials.gov, NCT02829762 . Registered on 29 June 2016.
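    The sample size quoted above is fully specified (90% power, a 10-point difference, standard deviation 25, 20% loss to follow-up), so it can be reproduced with the standard two-sample normal approximation. The significance level (two-sided α = 0.05) and the final rounding convention below are assumptions, since the protocol excerpt does not state them:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.90):
    """Two-sample comparison of means, 1:1 allocation, normal
    approximation: n = 2 * (z_{1-a/2} + z_{1-b})**2 * (sd/delta)**2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

n = n_per_arm(delta=10, sd=25)           # about 132 per arm
# Inflate for 20% loss to follow-up and round up to the nearest ten
# (this rounding convention is an assumption, not from the protocol).
total = ceil(2 * n * 1.2 / 10) * 10
```

    Under these assumptions the calculation gives roughly 132 per arm and an inflated total of 320, consistent with the figure in the protocol.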

  3. SPACE-2: A Missed Opportunity to Compare Carotid Endarterectomy, Carotid Stenting, and Best Medical Treatment in Patients with Asymptomatic Carotid Stenoses.

    PubMed

    Eckstein, H-H; Reiff, T; Ringleb, P; Jansen, O; Mansmann, U; Hacke, W

    2016-06-01

    Because of recent advances in best medical treatment (BMT), it is currently unclear whether any additional surgical or endovascular interventions confer additional benefit, in terms of preventing late ipsilateral carotid territory ischemic stroke in asymptomatic patients with significant carotid stenoses. The aim was to compare the stroke-preventive effects of BMT alone, with that of BMT in combination with carotid endarterectomy (CEA) or carotid artery stenting (CAS) in patients with high grade asymptomatic extracranial carotid artery stenosis. SPACE-2 was planned as a three-armed, randomized controlled trial (BMT alone vs. CEA plus BMT vs. CAS plus BMT, ISRCTN 78592017). However, because of slow patient recruitment, the three-arm study design was amended (July 2013) to become two parallel randomized studies (BMT alone vs. CEA plus BMT, and BMT alone vs. CAS plus BMT). The change in study design did not lead to any significant increase in patient recruitment, and trial recruitment ceased after recruiting 513 patients over a 5 year period (CEA vs. BMT (n = 203); CAS vs. BMT (n = 197), and BMT alone (n = 113)). The 30 day rate of death/stroke was 1.97% for patients undergoing CEA, and 2.54% for patients undergoing CAS. No strokes or deaths occurred in the first 30 days after randomization in patients randomized to BMT. There were several potential reasons for the low recruitment rates into SPACE-2, including the ability for referring doctors to refer their patients directly for CEA or CAS outwith the trial, an inability to convince patients (who had come "mentally prepared" that an intervention was necessary) to accept BMT, and other economic constraints. Because of slow recruitment rates, SPACE-2 had to be stopped after randomizing only 513 patients. 
The German Research Foundation will provide continued funding to enable follow up of all recruited patients, and it is also planned to include these data in any future meta-analysis prepared by the Carotid Stenosis Trialists Collaboration. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  4. Dynamic Resource Management for Parallel Tasks in an Oversubscribed Energy-Constrained Heterogeneous Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Koenig, Gregory A; Machovec, Dylan

    2016-01-01

    The worth of completing parallel tasks is modeled using utility functions, which decrease monotonically with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of such a system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 hours). To maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy-constrained, we have designed, analyzed, and compared different heuristic techniques. Four utility-aware heuristics (i.e., Max Utility, Max Utility-per-Time, Max Utility-per-Resource, and Max Utility-per-Energy), three FCFS-based heuristics (Conservative Backfilling, EASY Backfilling, and FCFS with Multiple Queues), and a Random heuristic were examined in this study. A technique that is often used with the FCFS-based heuristics is the concept of a permanent reservation. We compare the performance of permanent reservations with temporary place-holders to demonstrate the advantages that place-holders can provide. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in an energy-constrained oversubscribed HPC environment. With place-holders, energy filtering, and dropping tasks with low potential utility, our utility-aware heuristics are able to significantly outperform the existing FCFS-based techniques.
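    The utility-aware heuristics named above all reduce to ranking the waiting tasks by some ratio of time-decaying utility to cost. A toy sketch of a Max Utility-per-Resource selection step follows; the task format, the linear decay profile, and the numbers are invented for illustration and are not taken from the report:

```python
def utility(task, t):
    """Monotonically decreasing utility: full value if the task would
    finish by its soft deadline, then linear decay down to zero."""
    finish = t + task["runtime"]
    late = max(0.0, finish - task["soft_deadline"])
    return max(0.0, task["value"] * (1.0 - late / task["decay_window"]))

def pick_next(queue, t, free_nodes):
    """Max Utility-per-Resource: among tasks that currently fit,
    take the one with the highest utility per node requested."""
    feasible = [task for task in queue if task["nodes"] <= free_nodes]
    if not feasible:
        return None
    return max(feasible, key=lambda task: utility(task, t) / task["nodes"])

queue = [
    {"name": "A", "nodes": 4, "runtime": 10, "value": 8.0,
     "soft_deadline": 15, "decay_window": 10},
    {"name": "B", "nodes": 1, "runtime": 10, "value": 4.0,
     "soft_deadline": 15, "decay_window": 10},
]
best = pick_next(queue, t=0, free_nodes=4)
```

    Dividing by a different cost in the ranking key gives the sibling heuristics: runtime for Max Utility-per-Time, estimated energy for Max Utility-per-Energy, and no divisor at all for plain Max Utility.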

  5. Long-term survival of endodontically treated, maxillary anterior teeth restored with either tapered or parallel-sided glass-fiber posts and full-ceramic crown coverage.

    PubMed

    Signore, Antonio; Benedicenti, Stefano; Kaitsas, Vassilios; Barone, Michele; Angiero, Francesca; Ravera, Giambattista

    2009-02-01

    This retrospective study investigated the clinical effectiveness over up to 8 years of parallel-sided and of tapered glass-fiber posts, in combination with either hybrid composite or dual-cure composite resin core material, in endodontically treated, maxillary anterior teeth covered with full-ceramic crowns. The study population comprised 192 patients and 526 endodontically treated teeth, with various degrees of hard-tissue loss, restored by the post-and-core technique. Four groups were defined based on post shape and core build-up materials, and within each group post-and-core restorations were assigned randomly with respect to root morphology. Inclusion criteria were symptom-free endodontic therapy, root-canal treatment with a minimum apical seal of 4 mm, application of rubber dam, need for post-and-core complex because of coronal tooth loss, and tooth with at least one residual coronal wall. Survival rate of the post-and-core restorations was determined using Kaplan-Meier statistical analysis. The restorations were examined clinically and radiologically; mean observation period was 5.3 years. The overall survival rate of glass-fiber post-and-core restorations was 98.5%. The survival rate for parallel-sided posts was 98.6% and for tapered posts was 96.8%. Survival rates for core build-up materials were 100% for dual-cure composite and 96.8% for hybrid light-cure composite. For both glass-fiber post designs and for both core build-up materials, clinical performance was satisfactory. Survival was higher for teeth retaining four and three coronal walls.
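    The Kaplan-Meier analysis mentioned above has a compact form: at each observed failure time, the survival curve is multiplied by (1 − deaths/at-risk), and censored subjects simply leave the risk set. A self-contained sketch of the product-limit estimator on synthetic data (not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit estimator. `times` are follow-up times;
    `events` is 1 for a failure, 0 for a censored observation.
    Returns (time, S(t)) pairs at each failure time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = leaving = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]   # failures at time t
            leaving += 1                 # failures plus censorings at t
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= leaving
    return curve

# Synthetic illustration: 10 restorations, two failures (years 2 and 5).
times = [1, 2, 2, 3, 4, 5, 5, 6, 7, 8]
events = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0]
curve = kaplan_meier(times, events)
```

    With this data the curve steps to 8/9 at year 2 (one failure among nine at risk) and to 8/9 × 4/5 at year 5, showing how censored teeth still contribute to the risk set until they drop out.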

  6. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. 
This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin’s own 10 petaflops Stampede system, ANL’s Mira system, and ORNL’s Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
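    The randomized maximum likelihood (RML) idea mentioned in point 1 is easiest to see in a scalar linear-Gaussian toy problem: perturb the datum and the prior mean with their respective noise, then solve the resulting regularized least-squares problem; for linear forward maps each such solve is an exact posterior sample. A pure-Python sketch with an entirely hypothetical problem setup:

```python
import random

def rml_sample(rng, d, g, sigma_obs, m_prior, sigma_prior):
    """One RML draw for the scalar linear model d = g*m + noise:
    perturb the datum and the prior mean, then minimize
    (d' - g*m)**2 / sigma_obs**2 + (m - m')**2 / sigma_prior**2,
    which has a closed form for this quadratic objective."""
    d_pert = d + rng.gauss(0.0, sigma_obs)
    m_pert = m_prior + rng.gauss(0.0, sigma_prior)
    w_obs = g / sigma_obs ** 2
    w_prior = 1.0 / sigma_prior ** 2
    return (w_obs * d_pert + w_prior * m_pert) / (g * w_obs + w_prior)

rng = random.Random(1)
samples = [rml_sample(rng, d=2.0, g=1.0, sigma_obs=0.5,
                      m_prior=0.0, sigma_prior=1.0)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
# Exact posterior mean for this setup:
# (d*g/s_o**2) / (g**2/s_o**2 + 1/s_p**2) = 2/(4+1) * 4 = 1.6
```

    The appeal for extreme-scale inversion is that each draw is an optimization problem of exactly the kind PDE-constrained optimization machinery already solves well, and all draws are independent, hence trivially parallel; for nonlinear maps the draws are only approximate posterior samples.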

  7. MARBLE: A system for executing expert systems in parallel

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Johnson, Coe; Johnson, Dean

    1990-01-01

    This paper details the MARBLE 2.0 system which provides a parallel environment for cooperating expert systems. The work has been done in conjunction with the development of an intelligent computer-aided design system, ICADS, by the CAD Research Unit of the Design Institute at California Polytechnic State University. MARBLE (Multiple Accessed Rete Blackboard Linked Experts) is a system built on the C Language Integrated Production System (CLIPS) expert system tool. A copied blackboard is used for communication between the shells to establish an architecture which supports cooperating expert systems that execute in parallel. The design of MARBLE is simple, but it provides support for a rich variety of configurations, while making it relatively easy to demonstrate the correctness of its parallel execution features. In its most elementary configuration, individual CLIPS expert systems execute on their own processors and communicate with each other through a modified blackboard. Control of the system as a whole, and specifically of writing to the blackboard, is provided by one of the CLIPS expert systems, an expert control system.
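    The cooperating-experts pattern described above — independent rule-based experts that communicate only through a blackboard, under an expert control system — can be sketched compactly. The experts and facts below are invented for illustration; MARBLE's actual CLIPS shells, copied blackboard, and parallel execution are far more elaborate:

```python
class Blackboard:
    """Shared fact store; in MARBLE each shell works on a copied
    blackboard, but a single dict suffices for a sketch."""
    def __init__(self):
        self.facts = {}
    def post(self, key, value):
        self.facts[key] = value
    def read(self, key):
        return self.facts.get(key)

def site_expert(board):
    # Fires once a design request appears; posts a site constraint.
    if board.read("request") and not board.read("site"):
        board.post("site", "sloped lot")
        return True
    return False

def layout_expert(board):
    # Depends only on the blackboard: waits for the site expert's fact.
    if board.read("site") and not board.read("layout"):
        board.post("layout", "split-level plan")
        return True
    return False

def control(board, experts, max_cycles=10):
    """Control expert: keep offering the board to each expert until a
    full cycle passes in which no rule fires."""
    for _ in range(max_cycles):
        if not any(expert(board) for expert in experts):
            break

board = Blackboard()
board.post("request", "two-bedroom house")
control(board, [layout_expert, site_expert])
```

    Note that the experts never call one another: ordering emerges from the facts on the board, which is what lets such systems run the experts in parallel on separate processors.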

  8. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
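As a generic illustration of the discrete event modeling techniques the abstract contrasts with time-stepped models, here is a minimal event-driven sketch (link names and travel times are hypothetical, and congestion/queuing is omitted; this is not the SCATTER codebase). Vehicles hop between road links via a time-ordered event heap, so the simulator jumps from one event to the next and idle periods cost nothing:

```python
import heapq

# Minimal discrete-event sketch of vehicles traversing road links.
# Unlike a time-stepped model, the simulator advances directly from
# event to event. All names and numbers are hypothetical; free-flow
# travel times only, no congestion or queuing.

TRAVEL_TIME = {"A": 5.0, "B": 3.0}   # seconds to cross each link
ROUTE = {"A": "B", "B": None}        # link B is the exit

events = []   # min-heap of (time, vehicle_id, link) entry events
log = []      # (vehicle_id, network exit time)

# Three vehicles enter link A at staggered times.
for vid, t0 in [(1, 0.0), (2, 1.0), (3, 2.0)]:
    heapq.heappush(events, (t0, vid, "A"))

while events:
    t, vid, link = heapq.heappop(events)   # next event in time order
    done_at = t + TRAVEL_TIME[link]
    nxt = ROUTE[link]
    if nxt is None:
        log.append((vid, done_at))         # vehicle leaves the network
    else:
        heapq.heappush(events, (done_at, vid, nxt))

print(log)  # [(1, 8.0), (2, 9.0), (3, 10.0)]
```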

  9. Ecologically optimizing exercise maintenance in men and women post-cardiac rehabilitation: Protocol for a randomized controlled trial of efficacy with economics (ECO-PCR).

    PubMed

    Reid, Robert; Blanchard, Chris M; Wooding, Evyanne; Harris, Jennifer; Krahn, Murray; Pipe, Andrew; Chessex, Caroline; Grace, Sherry L

    2016-09-01

Exercise-based cardiac rehabilitation (CR) participation results in increased cardio-metabolic fitness, which is associated with reduced mortality. However, many graduates fail to maintain exercise post-program. ECO-PCR investigates the efficacy and cost-effectiveness of a social ecologically-based intervention to increase long-term exercise maintenance following the completion of CR. A three-site, 2-group, parallel randomized controlled trial is underway. 412 male and 192 female (N=604) supervised CR participants are being recruited just before CR graduation. Participants are randomized (1:1 concealed allocation) to intervention or usual care. A 50-week exercise facilitator intervention has been designed to assist CR graduates in the transition from structured, supervised exercise to self-managed home- or community-based (e.g., Heart Wise Exercise programs) exercise. The intervention consists of 8 telephone contacts over the 50-week period: 3 individual and 5 group. Assessments occur at CR graduation and at 26, 52, and 78 weeks post-randomization. The primary outcome is change in minutes of accelerometer-measured moderate to vigorous-intensity physical activity (MVPA) from CR graduation to 52 weeks post-randomization. Secondary measures include exercise capacity, quality of life, and cardiovascular risk factors. Analyses will be undertaken based on intention-to-treat. For the primary outcome, an analysis of variance will be computed to test the change in minutes of MVPA in each group between CR graduation and the 52-week follow-up (2 [arm] × 2 [time]). Secondary objectives will be assessed using mixed-model repeated measures analyses to compare differences between groups over time. Mean costs and quality-adjusted life years for each arm will be estimated. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The hemodynamic effects of intravenous paracetamol (acetaminophen) vs normal saline in cardiac surgery patients: A single center placebo controlled randomized study

    PubMed Central

    Churilov, Leonid

    2018-01-01

The hemodynamic effects of intravenous (IV) paracetamol in patients undergoing cardiac surgery are unknown. We performed a prospective single center placebo controlled randomized study with parallel group design in adult patients undergoing elective cardiac surgery. Participants received paracetamol (1 gram) IV or placebo (an equal volume of 0.9% saline) preoperatively, followed by two postoperative doses 6 hours apart. The primary endpoint was the absolute change in systolic blood pressure (SBP) 30 minutes after the preoperative infusion, analysed using an ANCOVA model. Secondary endpoints included absolute changes in mean arterial pressure (MAP) and diastolic blood pressure (DBP), and other key hemodynamic variables after each infusion. All other endpoints were analysed using random-effect generalized least squares regression modelling with individual patients treated as random effects. Fifty participants were randomly assigned to receive paracetamol (n = 25) or placebo (n = 25). Post preoperative infusion, paracetamol decreased SBP by a mean (SD) of 13 (18) mmHg, p = 0.02, compared to a mean (SD) of 1 (11) mmHg with saline. Paracetamol decreased MAP and DBP by a mean (SD) of 9 (12) mmHg and 8 (9) mmHg (p = 0.01 and 0.02), respectively, compared to a mean (SD) of 1 (8) mmHg and 0 (6) mmHg with placebo. Postoperatively, there were no significant differences in pressure- or flow-based hemodynamic parameters between the groups. This study provides high quality evidence that IV paracetamol in patients undergoing cardiac surgery causes a transient decrease in blood pressure when administered before surgery but has no adverse hemodynamic effects when administered postoperatively. PMID:29659631
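The trial's primary endpoint was analysed with an ANCOVA model. As a generic illustration of what such a model does (estimating the treatment effect on the post-infusion value while adjusting for each patient's baseline), here is a pure-Python ordinary-least-squares sketch on synthetic, noise-free numbers; it is not the trial's data or code:

```python
# Illustrative ANCOVA-style analysis: post-infusion SBP modelled as
# intercept + treatment effect + baseline covariate, fit by ordinary
# least squares via the normal equations. The numbers are synthetic
# and noise-free (post = 0.9*baseline - 12*group + 10); this is not
# the trial's data.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# (group, baseline SBP, post-infusion SBP); group 1 = treated arm
data = [(0, 120, 118.0), (0, 140, 136.0), (0, 130, 127.0),
        (1, 125, 110.5), (1, 135, 119.5), (1, 145, 128.5)]

X = [[1.0, g, b] for g, b, _ in data]   # design columns: 1, group, baseline
y = [post for _, _, post in data]

# Normal equations: (X^T X) beta = X^T y
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(3)]

intercept, group_effect, baseline_slope = solve3(XtX, Xty)
print(round(group_effect, 2))   # -12.0 mmHg: baseline-adjusted treatment effect
```

The point of the covariate is visible in the fitted slope: differences in baseline SBP are absorbed by `baseline_slope`, so `group_effect` isolates the treatment contrast.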

  11. The hemodynamic effects of intravenous paracetamol (acetaminophen) vs normal saline in cardiac surgery patients: A single center placebo controlled randomized study.

    PubMed

    Chiam, Elizabeth; Bellomo, Rinaldo; Churilov, Leonid; Weinberg, Laurence

    2018-01-01

The hemodynamic effects of intravenous (IV) paracetamol in patients undergoing cardiac surgery are unknown. We performed a prospective single center placebo controlled randomized study with parallel group design in adult patients undergoing elective cardiac surgery. Participants received paracetamol (1 gram) IV or placebo (an equal volume of 0.9% saline) preoperatively, followed by two postoperative doses 6 hours apart. The primary endpoint was the absolute change in systolic blood pressure (SBP) 30 minutes after the preoperative infusion, analysed using an ANCOVA model. Secondary endpoints included absolute changes in mean arterial pressure (MAP) and diastolic blood pressure (DBP), and other key hemodynamic variables after each infusion. All other endpoints were analysed using random-effect generalized least squares regression modelling with individual patients treated as random effects. Fifty participants were randomly assigned to receive paracetamol (n = 25) or placebo (n = 25). Post preoperative infusion, paracetamol decreased SBP by a mean (SD) of 13 (18) mmHg, p = 0.02, compared to a mean (SD) of 1 (11) mmHg with saline. Paracetamol decreased MAP and DBP by a mean (SD) of 9 (12) mmHg and 8 (9) mmHg (p = 0.01 and 0.02), respectively, compared to a mean (SD) of 1 (8) mmHg and 0 (6) mmHg with placebo. Postoperatively, there were no significant differences in pressure- or flow-based hemodynamic parameters between the groups. This study provides high quality evidence that IV paracetamol in patients undergoing cardiac surgery causes a transient decrease in blood pressure when administered before surgery but has no adverse hemodynamic effects when administered postoperatively.

  12. Impact of a Daily SMS Medication Reminder System on Tuberculosis Treatment Outcomes: A Randomized Controlled Trial.

    PubMed

    Mohammed, Shama; Glennerster, Rachel; Khan, Aamir J

    2016-01-01

The rapid uptake of mobile phones in low and middle-income countries over the past decade has provided public health programs unprecedented access to patients. While programs have used text messages to improve medication adherence, there have been no high-powered trials evaluating their impact on tuberculosis treatment outcomes. To measure the impact of Zindagi SMS, a two-way SMS reminder system, on treatment success of people with drug-sensitive tuberculosis, we conducted a two-arm, parallel design, effectiveness randomized controlled trial in Karachi, Pakistan. Individual participants were randomized to either Zindagi SMS or the control group. Zindagi SMS sent daily SMS reminders to participants and asked them to respond through SMS or missed (unbilled) calls after taking their medication. Non-respondents were sent up to three reminders a day. The setting comprised public and private sector tuberculosis clinics in Karachi, Pakistan. Newly-diagnosed patients with smear or bacteriologically positive pulmonary tuberculosis who were on treatment for less than two weeks; 15 years of age or older; reported having access to a mobile phone; and intended to live in Karachi throughout treatment were eligible to participate. We enrolled 2,207 participants, with 1,110 randomized to Zindagi SMS and 1,097 to the control group. The primary outcome was clinically recorded treatment success based upon intention-to-treat. We found no significant difference between the Zindagi SMS and control groups for treatment success (719 or 83% vs. 903 or 83%, respectively, p = 0.782). There was no significant program effect on self-reported medication adherence reported during unannounced visits during treatment. In this large-scale randomized controlled effectiveness trial of SMS medication reminders for tuberculosis treatment, we found no significant impact. The trial was registered with ClinicalTrials.gov, NCT01690754.

  13. One-Year Follow-Up of the Effectiveness of Cognitive Behavioral Group Therapy for Patients' Depression: A Randomized, Single-Blinded, Controlled Study.

    PubMed

    Chiang, Kai-Jo; Chen, Tsai-Hui; Hsieh, Hsiu-Tsu; Tsai, Jui-Chen; Ou, Keng-Liang; Chou, Kuei-Ru

    2015-01-01

The aim of the study was to investigate the long-term (one-year) effectiveness of a 12-session weekly cognitive behavior group therapy (CBGT) for patients with depression. This was a single-blind randomized controlled study with a 2-arm parallel group design. Eighty-one subjects were randomly assigned to a 12-session intervention group (CBGT) or a control group (usual outpatient psychiatric care), and 62 completed the study. The primary outcome was depression, measured with the Beck Depression Inventory (BDI-II) and the Hamilton Rating Scale for Depression (HRSD). The secondary outcome was automatic thoughts, measured by the Automatic Thoughts Questionnaire (ATQ). Both groups were evaluated at pretest (2 weeks before the intervention), posttest (after the 12 therapy sessions), and short- (3 months), medium- (6 months), and long-term (12 months) follow-up. After receiving CBGT, the experimental group had a statistically significant reduction in the BDI-II from 40.30 points at baseline to 17.82 points at session eight and to 10.17 points at postintervention (P < 0.001). Similar effects were seen on the HRSD. ATQ scores had decreased significantly at the 12th session, 6 months after the sessions, and 1 year after the sessions ended (P < 0.001). We concluded that CBGT is effective for reducing depression and continued to be effective at 1 year of follow-up.

  14. Effects of tactile-kinesthetic stimulation on low birth weight neonates.

    PubMed

    Aliabadi, Faranak; Askary, Reihaneh K

    2013-06-01

Low birth weight (LBW; 1500 g ≤ birth weight ≤ 2499 g) is one of the most serious health problems in neonates. These neonates need complementary interventions (e.g., tactile-kinesthetic stimulation) to promote development. This study was conducted to determine the effect of tactile-kinesthetic stimulation (TKS) on the physical and behavioral development of low birth weight neonates. This was a randomized controlled trial with equal randomization (1:1 for two groups) and a parallel group design. Forty LBW neonates were randomly allocated into test (n = 20) and control (n = 20) groups. TKS was provided for three 15-minute periods per day for 10 consecutive days to the test group, with the massages consisting of moderate pressure strokes in supine and prone position and kinesthetic exercises consisting of flexion and extension of the limbs. All measurements were taken before and after completion of the study with the same equipment (Philips electronic weighing scale with an accuracy of ±5 grams and the Brazelton Neonatal Behavioral Assessment) and by the same person. There was a trend towards increased daily weight gain, but without statistical significance. On the Brazelton scale, the test group showed statistically significantly improved scores on the 'motor' (P-value <0.001) and 'regulation of state' (P-value = 0.039) clusters after the 10 days of TKS. TKS has no adverse effects on physiologic parameters and results in better adaptive behavior of LBW neonates compared to those without TKS.

  15. Eight Weeks of Cosmos caudatus (Ulam Raja) Supplementation Improves Glycemic Status in Patients with Type 2 Diabetes: A Randomized Controlled Trial

    PubMed Central

    Cheng, Shi-Hui; Ismail, Amin; Anthony, Joseph; Ng, Ooi Chuan; Hamid, Azizah Abdul; Barakatun-Nisak, Mohd Yusof

    2015-01-01

Objectives. Optimizing glycemic control is crucial to prevent type 2 diabetes related complications. Cosmos caudatus is reported to have a promising effect in improving plasma blood glucose in an animal model. However, its impact on humans remains unclear. This study was carried out to evaluate the effectiveness of C. caudatus on glycemic status in patients with type 2 diabetes. Materials and Methods. In this randomized controlled trial with a two-arm parallel-group design, a total of 101 subjects with type 2 diabetes were randomly allocated to the diabetic-ulam or diabetic control group for eight weeks. Subjects in the diabetic-ulam group consumed 15 g of C. caudatus daily for eight weeks while diabetic controls abstained from taking C. caudatus. Both groups received the standard lifestyle advice. Results. After 8 weeks of supplementation, C. caudatus significantly reduced serum insulin (−1.16 versus +3.91), reduced HOMA-IR (−1.09 versus +1.34), and increased QUICKI (+0.05 versus −0.03) in the diabetic-ulam group compared with the diabetic controls. The HbA1c level also improved, although the change was not statistically significant (−0.76% versus −0.37%). C. caudatus was safe to consume. Conclusions. C. caudatus supplementation significantly improves insulin resistance and insulin sensitivity in patients with type 2 diabetes. PMID:26713097

  16. Effectiveness of peer-led dissonance-based eating disorder prevention groups: results from two randomized pilot trials.

    PubMed

    Stice, Eric; Rohde, Paul; Durant, Shelley; Shaw, Heather; Wade, Emily

    2013-05-01

The present preliminary trials tested whether undergraduate peer leaders can effectively deliver a dissonance-based eating disorder prevention program, which could facilitate broad dissemination of this efficacious intervention. In Study 1, female undergraduates (N=171) were randomized to peer-led groups, clinician-led groups, or an educational brochure control condition. In Study 2, which addressed a design limitation of Study 1 by using completely parallel outcome measures across conditions, female undergraduates (N=148) were randomized to either immediate peer-led groups or a waitlist control condition. In Study 1, participants in peer- and clinician-led groups showed significantly greater pre-post reductions in risk factors and eating disorder symptoms than controls (M d=.64 and .98 respectively), though clinician- versus peer-led groups had higher attendance and competence ratings, and produced stronger effects at posttest (M d=.32) and at 1-year follow-up (M d=.26). In Study 2, participants in peer-led groups showed greater pre-post reductions in all outcomes than waitlist controls (M d=.75). Results provide novel evidence that dissonance-based eating disorder prevention groups led by undergraduate peers are feasible and produce greater reductions in eating disorder risk factors and symptoms than minimal-intervention control conditions, but indicate that effects are smaller for peer- versus clinician-led groups. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Incomplete caries removal and indirect pulp capping in primary molars: a randomized controlled trial.

    PubMed

    Bressani, Ana Eliza Lemes; Mariath, Adriela Azevedo Souza; Haas, Alex Nogueira; Garcia-Godoy, Franklin; de Araujo, Fernando Borba

    2013-08-01

    To compare the effect of incomplete caries removal (ICR) and indirect pulp capping (IPC) with calcium hydroxide (CH) or an inert material (wax) on color, consistency and contamination of the remaining dentin of primary molars. This double-blind, parallel-design, randomized controlled trial included 30 children presenting one primary molar with deep caries lesion. Children were randomly assigned after ICR to receive IPC with CH or wax. All teeth were then restored with resin composite. Baseline dentin color and consistency were evaluated after ICR, and dentin samples were collected for contamination analyses using scanning electron microscopy. After 3 months, restorations were removed and the three parameters were re-evaluated. In both groups, dentin became significantly darker after 3 months. No cases of yellow dentin were observed after 3 months with CH compared to 33.3% of the wax cases (P < 0.05). A statistically significant difference over time was observed only for CH regarding consistency. CH stimulated a dentin hardening process in a statistically higher number of cases than wax (86.7% vs. 33.3%; P = 0.008). Contamination changed significantly over time in CH and wax without significant difference between groups. It was concluded that CH and wax arrested the carious process of the remaining carious dentin after indirect pulp capping, but CH showed superior dentin color and consistency after 3 months.

  18. Effect of Two Frequencies of Whole-Body Vibration Training on Balance and Flexibility of the Elderly: A Randomized Controlled Trial.

    PubMed

    Tseng, Shiuan-Yu; Hsu, Pi-Shan; Lai, Chung-Liang; Liao, Wan-Chun; Lee, Meng-Chih; Wang, Chun-Hou

    2016-10-01

The aim of this study was to investigate the effects of whole-body vibration training at different frequencies on the balance and flexibility of the healthy elderly. The participants were recruited from hospital volunteers and the community; all were healthy subjects over 65 years of age. The study involved three randomized groups in a parallel and single-blind design. The main outcome variables included the limits of stability test and the sit and reach test, which were measured at pre-training, Month 1 (mid-training), Month 3 (post-training), and Month 6 (follow-up). A total of 45 subjects, with a mean age of 69.6 ± 3.9 years, were randomly divided into three groups. There was a significant interaction in the performance of the limits of stability and sit and reach tests among the groups at the four time points (F = 25.218, P < 0.001 and F = 12.235, P < 0.001, respectively). There was a significant difference in balance performance between the vibration groups at frequencies of 20 Hz and 40 Hz and the control group at Month 1, Month 3, and Month 6 (P < 0.001). Whole-body vibration training at 20 Hz provides significant benefits to the balance and flexibility of elderly people who do not engage in habitual exercise.

  19. Multiprocessor graphics computation and display using transputers

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A package of two-dimensional graphics routines was developed to run on a transputer-based parallel processing system. These routines were designed to enable applications programmers to easily generate and display results from the transputer network in a graphic format. The graphics procedures were designed for the lowest possible network communication overhead for increased performance. The routines were designed for ease of use and to present an intuitive approach to generating graphics on the transputer parallel processing system.

  20. Transmission Index Research of Parallel Manipulators Based on Matrix Orthogonal Degree

    NASA Astrophysics Data System (ADS)

    Shao, Zhu-Feng; Mo, Jiao; Tang, Xiao-Qiang; Wang, Li-Ping

    2017-11-01

A performance index is the standard for performance evaluation and the foundation of both performance analysis and optimal design of a parallel manipulator. Finding suitable kinematic indices has long been an important and challenging issue for parallel manipulators. There are extensive studies in this field, but few existing indices meet all the requirements of being simple, intuitive, and universal. To address this problem, the matrix orthogonal degree is adopted, and generalized transmission indices that can evaluate the motion/force transmissibility of fully parallel manipulators are proposed. Transmission performance analysis of typical branches, end effectors, and parallel manipulators is given to illustrate the proposed indices and analysis methodology. Simulation and analysis results reveal that the proposed transmission indices possess significant advantages: they are normalized and finite (ranging from 0 to 1), dimensionally homogeneous, frame-free, intuitive, and easy to calculate. Moreover, the proposed indices indicate the good-transmission region and proximity to singularity with better resolution than the traditional local conditioning index, and provide a novel tool for kinematic analysis and optimal design of fully parallel manipulators.

  1. Work stealing for GPU-accelerated parallel programs in a global address space framework: WORK STEALING ON GPU-ACCELERATED SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
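The paper's work-stealing strategies target hybrid CPU-GPU clusters, but the core load-balancing idea can be shown in a toy form. The sketch below is a hypothetical single-threaded, round-based simulation (not the paper's system): all tasks start on one worker, owners work from the tail of their own deque, and idle workers steal from the head of a victim's deque, which keeps thieves and owners at opposite ends.

```python
from collections import deque

# Toy, single-threaded round-based simulation of work stealing for
# dynamic load balancing. Illustrative only: the system described
# above targets hybrid CPU-GPU clusters in a global address space,
# which this sketch does not model.

def run(num_workers, num_tasks):
    deques = [deque() for _ in range(num_workers)]
    deques[0].extend(range(num_tasks))   # worst-case imbalance: all work on worker 0
    executed = [0] * num_workers

    while any(deques):
        for w in range(num_workers):
            if deques[w]:
                deques[w].pop()          # owner works from its own tail
                executed[w] += 1
            else:
                # An idle worker steals from the *head* of the most
                # loaded victim's deque, away from the owner's tail.
                victims = [v for v in range(num_workers) if deques[v]]
                if victims:
                    v = max(victims, key=lambda i: len(deques[i]))
                    deques[w].append(deques[v].popleft())
    return executed

counts = run(num_workers=4, num_tasks=100)
print(counts, sum(counts))   # every task runs exactly once, spread over 4 workers
```

In a real runtime the steal would be a concurrent operation with synchronization (and, per the paper, would also weigh data-movement cost between CPU and GPU memory); here it is a plain deque transfer.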

  2. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.

  3. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
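The Hockney-style model the abstract generalizes charges each message a fixed startup latency plus a per-word bandwidth term. A small sketch with made-up machine parameters illustrates the model and its classic consequence, the half-performance message length (n_1/2 = latency × asymptotic rate):

```python
# Hockney-style communication cost model: moving an n-word message
# costs a fixed startup latency plus a per-word bandwidth term.
# Machine parameters below are made up for illustration.

def transfer_time(n_words, latency_s, words_per_s):
    return latency_s + n_words / words_per_s

def effective_rate(n_words, latency_s, words_per_s):
    return n_words / transfer_time(n_words, latency_s, words_per_s)

LAT = 1e-5   # 10 microsecond startup latency
BW = 1e8     # asymptotic rate: 1e8 words/s

# Half-performance length n_1/2: the message size at which the
# effective transfer rate reaches half the asymptotic rate.
n_half = LAT * BW
print(round(n_half))                                   # 1000 words
print(round(effective_rate(n_half, LAT, BW) / BW, 6))  # 0.5

# Latency dominates short messages, so batching pays off:
split = 100 * transfer_time(10, LAT, BW)   # 100 messages of 10 words
batched = transfer_time(1000, LAT, BW)     # 1 message of 1000 words
print(round(split / batched, 1))           # 50.5: split transfers are ~50x slower
```

This is the quantitative reason algorithm designers aggregate communication: the same data moved in fewer, larger messages pays the startup cost fewer times.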

  4. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  5. Design of a MIMD neural network processor

    NASA Astrophysics Data System (ADS)

    Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.

    1994-03-01

The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second, with an eight-processor array capable of over one billion connections per second.
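The throughput figures quoted above can be checked directly. The sketch below rests on two assumptions that the abstract does not state: "K" means 1024, and the eight-processor aggregate scales linearly.

```python
# Checking the throughput figures quoted above. Two assumptions not
# stated in the abstract: "K" means 1024, and the eight-processor
# aggregate scales linearly across modules.

neurons_per_module = 8 * 1024
connections_per_module = 32 * 1024
cps_per_module = 140_000_000   # connections per second, one module

print(connections_per_module / neurons_per_module)   # 4.0 connections/neuron on average
print(8 * cps_per_module)                            # 1120000000: "over one billion"
```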

6. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary JO

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  7. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  8. Parallel transmission RF pulse design with strict temperature constraints.

    PubMed

    Deniz, Cem M; Carluccio, Giuseppe; Collins, Christopher

    2017-05-01

RF safety in parallel transmission (pTx) is generally ensured by imposing specific absorption rate (SAR) limits during pTx RF pulse design. There is increasing interest in using temperature to ensure safety in MRI. In this work, we present a local temperature correlation matrix formalism and apply it to impose strict constraints on maximum absolute temperature in pTx RF pulse design for head and hip regions. Electromagnetic field simulations were performed on the head and hip of virtual body models. Temperature correlation matrices were calculated for four different exposure durations ranging between 6 and 24 min using simulated fields and body-specific constants. Parallel transmission RF pulses were designed using either SAR or temperature constraints, and compared with each other and with unconstrained RF pulse design in terms of excitation fidelity and safety. The use of temperature correlation matrices resulted in better excitation fidelity compared with the use of SAR in parallel transmission RF pulse design (for the 6 min exposure period, 8.8% versus 21.0% for the head and 28.0% versus 32.2% for the hip region). As RF exposure duration increases (from 6 min to 24 min), the benefit of using temperature correlation matrices in RF pulse design diminishes. However, the safety of the subject is always guaranteed (the maximum temperature was equal to 39°C). This trend was observed in both head and hip regions, where the perfusion rates are very different. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Anjun; Wang, Jing; Lu, Jian-Yu

    2002-05-01

Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, "2D and 3D high frame rate imaging with limited diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain compensation (TGC) control, Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  10. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

Sparse distributed memory is a generalized random access memory (RAM) for long (e.g., 1,000-bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, in adaptive control of automated equipment, and, in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
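The Hamming-distance read/write mechanism described above can be sketched in a few lines. The memory size, activation radius, and bipolar counter scheme below are illustrative choices, not the prototype's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 256       # address/word length in bits
M = 2000      # hard locations (far fewer than the 2^256 possible addresses)
radius = 112  # activation radius in Hamming distance (illustrative)

hard_addr = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)

def active(addr):
    # Hamming distance from the query address to every hard location.
    dist = np.count_nonzero(hard_addr != addr, axis=1)
    return dist <= radius

def write(addr, word):
    # Bipolar counter update at every activated hard location.
    counters[active(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    # Sum counters over activated locations, then take a per-bit majority.
    sums = counters[active(addr)].sum(axis=0)
    return (sums > 0).astype(np.int8)

word = rng.integers(0, 2, size=N, dtype=np.int8)
addr = rng.integers(0, 2, size=N, dtype=np.int8)
write(addr, word)

# Read back from a noisy address a few bits away from the original.
noisy = addr.copy()
noisy[:10] ^= 1
recovered = read(noisy)
print(np.count_nonzero(recovered != word))  # typically small in a lightly loaded memory
```

The similarity sensitivity falls out of the overlap between the sets of hard locations activated by the original and the perturbed addresses.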

  11. Recommendations for pharmacological clinical trials in children with functional constipation: The Rome foundation pediatric subcommittee on clinical trials.

    PubMed

    Koppen, I J N; Saps, M; Lavigne, J V; Nurko, S; Taminiau, J A J M; Di Lorenzo, C; Benninga, M A

    2018-04-01

Evidence for the efficacy of commonly used drugs in the treatment of childhood functional constipation (FC) is scarce, studies are often of low quality and study designs are heterogeneous. Thus, recommendations for the design of clinical trials in childhood FC are needed. Members of the Rome Foundation and a member of the Pediatric Committee of the European Medicines Agency formed a committee to create recommendations for the design of clinical trials in children with FC. This committee recommends conducting randomized, double-blind, placebo-controlled, parallel-group clinical trials to assess the efficacy of new drugs for the treatment of childhood FC. Pediatric study participants should be included based on fulfilling the Rome IV criteria for FC. A treatment-free run-in period for baseline assessment is recommended. The trial duration should be at least 8 weeks. Treatment success is defined as no longer meeting the Rome IV criteria for FC. Stool consistency should be reported based on the Bristol Stool Scale. Endpoints of drug efficacy need to be tailored to the developmental age of the patient population. © 2018 John Wiley & Sons Ltd.

  12. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and a job-splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local search factors are implemented. We developed algorithms that adapt the results of local search back into the genetic algorithm with a minimum relocation operation on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
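As an illustration of how random keys can encode both machine assignment and job order (a common convention lets the integer part select a machine and the fractional part order the jobs), here is a minimal sketch. The processing times are invented, and setup times and job splitting are omitted for brevity; this is not the paper's GAspLA, only the random-key decoding idea with random sampling standing in for the genetic search.

```python
import random

random.seed(1)
n_jobs, n_machines = 6, 2

# Hypothetical processing time of each job on each (unrelated) machine.
p = [[random.randint(2, 9) for _ in range(n_machines)] for _ in range(n_jobs)]

def decode(keys):
    """Random-key decoding: integer part -> machine, fractional part -> order."""
    schedule = [[] for _ in range(n_machines)]
    for j in sorted(range(n_jobs), key=lambda k: keys[k] - int(keys[k])):
        schedule[min(int(keys[j]), n_machines - 1)].append(j)
    return schedule

def makespan(schedule):
    # Maximum machine load, ignoring setup times in this sketch.
    return max(sum(p[j][m] for j in jobs) for m, jobs in enumerate(schedule))

# Random sampling of chromosomes as a stand-in for the genetic search.
best_keys, best = None, float("inf")
for _ in range(500):
    keys = [random.uniform(0, n_machines) for _ in range(n_jobs)]
    value = makespan(decode(keys))
    if value < best:
        best_keys, best = keys, value
print(best)
```

A genetic algorithm would evolve the key vectors with crossover and mutation, and a hybrid local search would then need to write its improvements back into the keys, which is the relocation difficulty the abstract refers to.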

  13. Uncertainty Quantification of Nonlinear Electrokinetic Response in a Microchannel-Membrane Junction

    NASA Astrophysics Data System (ADS)

    Alizadeh, Shima; Iaccarino, Gianluca; Mani, Ali

    2015-11-01

We have conducted uncertainty quantification (UQ) for electrokinetic transport of ionic species through a hybrid microfluidic system using different probabilistic techniques. The system of interest is an H-configuration consisting of two parallel microchannels that are connected via a Nafion junction. This system is commonly used for ion preconcentration and stacking by utilizing a nonlinear response at the channel-Nafion junction that leads to deionization shocks. In this work, the Nafion medium is modeled as many parallel nano-pores, where the nano-pore diameter, Nafion porosity, and surface charge density are independent random variables. We evaluated the resulting uncertainty on the ion concentration fields as well as the deionization shock location. The UQ methods predicted consistent statistics for the outputs, and the results revealed that the shock location is weakly sensitive to the nano-pore surface charge and primarily driven by nano-pore diameters. The present study can inform the design of electrokinetic networks with increased robustness to natural manufacturing variability. Applications include water desalination and lab-on-a-chip systems.

  14. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
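A software analogue of the long-term (DC-like) and short-term (AC-like) change detection described above might look like the following sketch. The thresholds, decay rate, and synthetic frame sequence are invented for illustration; the actual system implements this logic in VLSI hardware with fuzzy logic devices.

```python
import numpy as np

rng = np.random.default_rng(7)

def triggers(frames, alpha=0.1, ac_thresh=5.0, dc_thresh=3.0):
    """Return frame indices where a short-term (AC-like) frame difference or
    a long-term (DC-like) drift from a running background exceeds a threshold."""
    background = frames[0].astype(float)
    prev = frames[0].astype(float)
    fired = []
    for i, f in enumerate(frames[1:], start=1):
        f = f.astype(float)
        ac = np.abs(f - prev).mean()        # short-term change
        dc = np.abs(f - background).mean()  # drift from the running average
        if ac > ac_thresh or dc > dc_thresh:
            fired.append(i)
        background = (1 - alpha) * background + alpha * f  # slow update
        prev = f
    return fired

# Quiet, slightly noisy scene with one abrupt event at frame 30.
frames = [np.full((32, 32), 100, dtype=np.uint8)
          + rng.integers(0, 2, (32, 32), dtype=np.uint8) for _ in range(60)]
frames[30] = frames[30] + 50  # sudden appearance of a bright object
print(triggers(frames))       # indices at and shortly after the event
```

Pre-trigger and post-trigger archiving then amounts to keeping a ring buffer of recent frames and freezing it around the fired indices.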

  15. Effect of synthetic prostaglandin E1 analog on gastric emptying of meals in man.

    PubMed

    Moore, J G; Alazraki, N; Clay, G D

    1986-01-01

    Forty-five subjects with healed duodenal ulcer were administered either a placebo or a low-dose or high-dose regimen of misoprostol, a synthetic PGE1 analog, in a double-blind, random, parallel-group design to assess the effect of this prostaglandin compound on the gastric emptying of liquid-solid meals. A dual-radionuclide technique to measure liquid- and solid-phase gastric emptying rates of physiological meals by external gamma camera imaging was used. All subjects had a pretreatment control (baseline) evaluation, followed one week later by a treatment-influenced emptying study. The results demonstrated that misoprostol did not significantly alter gastric emptying of either liquids or solids; however, these results cannot be extrapolated to other prostaglandin compounds because of the diverse and sometimes paradoxical effects of different prostaglandins on gastric motility.

  16. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  17. Design, Analysis, and Reporting of Crossover Trials for Inclusion in a Meta-Analysis.

    PubMed

    Li, Tianjing; Yu, Tsung; Hawkins, Barbara S; Dickersin, Kay

    2015-01-01

To evaluate the characteristics of the design, analysis, and reporting of crossover trials for inclusion in a meta-analysis of treatment for primary open-angle glaucoma, and to provide empirical evidence to inform the development of tools to assess the validity of the results from crossover trials and of reporting guidelines. We searched MEDLINE, EMBASE, and the Cochrane CENTRAL register for randomized crossover trials for a systematic review and network meta-analysis we are conducting. Two individuals independently screened the search results for eligibility and abstracted data from each included report. We identified 83 crossover trials eligible for inclusion. Issues affecting the risk of bias in crossover trials, such as carryover, period effects, and missing data, were often ignored. Some trials failed to accommodate the within-individual differences in the analysis. For a large proportion of the trials, the authors tabulated the results as if they arose from a parallel design. Precision estimates properly accounting for the paired nature of the design were often unavailable from the study reports; consequently, including trial findings in a meta-analysis would require further manipulation and assumptions. The high proportion of poorly reported analyses and results has the potential to affect whether crossover data should or can be included in a meta-analysis. There is a pressing need for reporting guidelines for crossover trials.

  18. Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE): a randomized controlled trial protocol.

    PubMed

    Winstein, Carolee J; Wolf, Steven L; Dromerick, Alexander W; Lane, Christianne J; Nelsen, Monica A; Lewthwaite, Rebecca; Blanton, Sarah; Scott, Charro; Reiss, Aimee; Cen, Steven Yong; Holley, Rahsaan; Azen, Stanley P

    2013-01-11

Residual disability after stroke is substantial; 65% of patients at 6 months are unable to incorporate the impaired upper extremity into daily activities. Task-oriented training programs are rapidly being adopted into clinical practice. In the absence of any consensus on the essential elements or dose of task-specific training, an urgent need exists for a well-designed trial to determine the effectiveness of a specific multidimensional task-based program governed by a comprehensive set of evidence-based principles. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) Stroke Initiative is a parallel-group, three-arm, single-blind, superiority randomized controlled trial of a theoretically defensible, upper extremity rehabilitation program provided in the outpatient setting. The primary objective of ICARE is to determine if there is a greater improvement in arm and hand recovery one year after randomization in participants receiving a structured training program termed the Accelerated Skill Acquisition Program (ASAP), compared to participants receiving usual and customary therapy of an equivalent dose (DEUCC). Two secondary objectives are to compare ASAP to a true (active monitoring only) usual and customary (UCC) therapy group and to compare DEUCC and UCC. Following baseline assessment, participants are randomized by site, stratified for stroke duration and motor severity. 360 adults will be randomized, 14 to 106 days following ischemic or hemorrhagic stroke onset, with mild to moderate upper extremity impairment, recruited at sites in Atlanta, Los Angeles and Washington, D.C. The Wolf Motor Function Test (WMFT) time score is the primary outcome at 1 year post-randomization. The Stroke Impact Scale (SIS) hand domain is a secondary outcome measure. The design includes concealed allocation during recruitment, screening and baseline, blinded outcome assessment and intention-to-treat analyses.
Our primary hypothesis is that the improvement in log-transformed WMFT time will be greater for the ASAP than the DEUCC group. This pre-planned hypothesis will be tested at a significance level of 0.05. ICARE will test whether ASAP is superior to the same number of hours of usual therapy. Pre-specified secondary analyses will test whether 30 hours of usual therapy is superior to current usual and customary therapy not controlled for dose. www.ClinicalTrials.gov Identifier: NCT00871715

  19. Role of Cysteines in Stabilizing the Randomized Receptor Binding Domains within Feline Leukemia Virus Envelope Proteins.

    PubMed

    Valdivieso-Torres, Leonardo; Sarangi, Anindita; Whidby, Jillian; Marcotrigiano, Joseph; Roth, Monica J

    2015-12-30

    Retargeting of gammaretroviral envelope proteins has shown promising results in the isolation of novel isolates with therapeutic potential. However, the optimal conditions required to obtain high-affinity retargeted envelope proteins with narrow tropism are not understood. This study highlights the advantage of constrained peptides within receptor binding domains and validates the random library screening technique of obtaining novel retargeted Env proteins. Using a modified vector backbone to screen the envelope libraries on 143B osteosarcoma cells, three novel and unique retargeted envelopes were isolated. The use of complex disulfide bonds within variable regions required for receptor binding is found within natural gammaretroviral envelope isolates. Interestingly, two of the isolates, named AII and BV2, have a pair of cysteines located within the randomized region of 11 amino acids similar to that identified within the CP Env, an isolate identified in a previous Env library screen on the human renal carcinoma Caki-1 cell line. The amino acids within the randomized region of AII and BV2 envelopes that are essential for viral infection have been identified in this study and include these cysteine residues. Through mutagenesis studies, the putative disulfide bond pairs including and beyond the randomized region were examined. In parallel, the disulfide bonds of CP Env were identified using mass spectrometry. The results indicate that this pair of cysteines creates the structural context to position key hydrophobic (F and W) and basic (K and H) residues critical for viral titer and suggest that AII, BV2, and CP internal cysteines bond together in distinct ways. Retargeted gammaretroviral particles have broad applications for therapeutic use. Although great advances have been achieved in identifying new Env-host cell receptor pairs, the rules for designing optimal Env libraries are still unclear. 
We have found that isolates with an additional pair of cysteines within the randomized region have the highest transduction efficiencies. This emphasizes the importance of considering cysteine pairs in the design of new libraries. Furthermore, our data clearly indicate that these cysteines are essential for viral infectivity by presenting essential residues to the host cell receptor. These studies facilitate the screening of Env libraries for functional entry into target cells, allowing the identification of novel gammaretroviral Envs targeting alternative host cell receptors for gene and protein delivery. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  20. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  1. The effect of three ergonomics training programs on the prevalence of low-back pain among workers of an Iranian automobile factory: a randomized clinical trial.

    PubMed

    Aghilinejad, M; Bahrami-Ahmadi, A; Kabir-Mokamelkhah, E; Sarebanha, S; Hosseini, H R; Sadeghi, Z

    2014-04-01

Many workers suffer from low-back pain. The type and severity of spinal complaints are related to work load. Lack of adherence to ergonomics recommendations is among the important causes of low-back pain. To assess the effect of 3 ergonomics training programs on the prevalence of low-back pain among workers of an Iranian automobile factory. In a parallel-design, 4-arm randomized clinical trial, 760 active workers of an automobile factory were studied. 503 workers were found eligible and randomized into 3 intervention groups (n=252) and a control group (n=251). The intervention groups consisted of 3 arms: 84 workers were educated by pamphlet, 84 by lectures, and 84 by workshop. The Nordic questionnaire was used to determine the prevalence of spinal complaints before and 1 year after the interventions. The trial is registered with the Iranian Randomized Clinical Trial Registry, number IRCT2013061213182N2. Of the 503 workers, 52 were lost to follow-up, leaving 451 workers for analysis. The prevalence of low-back pain at baseline was not significantly different among the studied arms. One year after the interventions, the prevalence did not change significantly from the baseline values for the lecture and pamphlet groups. However, the prevalence of LBP experienced during the last year decreased significantly (p=0.036), from 42% to 23%, in participants who took part in the workshop. Training of automobile factory workers in ergonomics is more effective by running workshops than by giving lectures or disseminating pamphlets.

  2. The benefits of giving a massage on the mental state of massage therapists: a randomized, controlled trial.

    PubMed

    Jensen, Anne M; Ramasamy, Adaikalavan; Hotek, Judith; Roel, Brian; Riffe, Drew

    2012-12-01

The objective of this study was to determine whether giving a massage had an impact on the mental state of the massage therapist. The design was a randomized, controlled, blinded study with two parallel groups. The study was conducted at an accredited school of therapeutic massage in Dallas, Texas, and comprised healthy female and male final-term massage students between the ages of 18 and 65 years. The participants were randomized into two groups: (1) the experimental group, who gave a 1-hour Swedish massage to a massage client (Massage group), or (2) the control group, who sat in a room doing normal daily activities (Control group). Both of these activities were a normal part of the daily routine for these massage students. The primary outcome was the change in Depression Anxiety and Stress Scale (DASS) scores pre- and postparticipation. Twenty-two (22) participants were randomized in this trial. The baseline characteristics were comparable between the two groups. A statistically significant advantage for the Massage group was found relative to the Control group in subjective anxiety (DASS Anxiety Subscale, p=0.014). There were no significant differences between the groups with regard to total DASS score (p=0.540), subjective depressive symptoms (DASS Depression Subscale, p=0.472), or subjective stress-related symptoms (DASS Stress Subscale, p=0.919). No adverse events were reported by any participant. This study shows that massage therapists themselves may benefit from giving a therapeutic massage, experiencing less subjective anxiety after giving a massage.

  3. Safety, Tolerability, and Pharmacokinetics of Therapeutic and Supratherapeutic Doses of Tramadol Hydrochloride in Healthy Adults: A Randomized, Double-Blind, Placebo-Controlled Multiple-Ascending-Dose Study.

    PubMed

    DeLemos, Byron; Richards, Henry M; Vandenbossche, Joris; Ariyawansa, Jay; Natarajan, Jaya; Alexander, Binu; Ramakrishna, Tage; Murtaugh, Thomas; Stahlberg, Hans-Jürgen

    2017-11-01

This randomized, double-blind, parallel-group multiple-ascending-dose study evaluated the safety, tolerability, and pharmacokinetics of tramadol hydrochloride in healthy adults to inform dosage and design for a subsequent QT/QTc study. Healthy men and women, 18 to 45 years old (inclusive), were sequentially assigned to the tramadol 200, 400, or 600 mg/day treatment cohort and, within each cohort, randomized (4:1) to either tramadol or placebo every 6 hours for 9 oral doses. Of the 24 participants randomized to tramadol (n = 8/cohort), 22 (91.7%) completed the study. The AUCτ,ss of tramadol increased approximately 2.2- and 3.6-fold for the (+) enantiomer and 2.0- and 3.5-fold for the (-) enantiomer with increasing dose from 200 to 400 and 600 mg/day, whereas the Cmax,ss increased 2.1- and 3.3-fold for the (+) enantiomer and 2.0- and 3.2-fold for the (-) enantiomer. Overall, 21 participants (87.5%) reported ≥1 treatment-emergent adverse event; the most frequent were nausea (17 of 24, 70.8%) and vomiting (7 of 24, 29.2%). Vomiting (affected participants and events) increased with increasing dose from 200 to 600 mg/day but was mild (5 of 24) or moderate (2 of 24) in severity. All tested dosage regimens of tramadol showed an acceptable safety and tolerability profile for further investigation in a thorough QT/QTc study. © 2017, The American College of Clinical Pharmacology.

  4. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled transparently by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatile nature, which permits use of whichever available computational resources are experiencing the least load at a given point in time.

  5. Parallel-Vector Algorithm for Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  6. DCL System Using Deep Learning Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4...software (Detection Classification for Machine learning - High Performance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to distribute to the various processing nodes with minimal changes

  7. Design of miniature type parallel coupled microstrip hairpin filter in UHF range

    NASA Astrophysics Data System (ADS)

    Hasan, Adib Belhaj; Rahman, Maj Tarikur; Kahhar, Azizul; Trina, Tasnim; Saha, Pran Kanai

    2017-12-01

A microstrip parallel coupled line bandpass filter is designed in the UHF range, and the filter size is reduced by a microstrip hairpin structure. An FR4 substrate is used as the base material of the filter. The filter is analyzed with both ADS and CST Design Studio in the frequency range of 500 MHz to 650 MHz. The bandwidth is found to be 13.27% at a center frequency of 570 MHz. Simulations from both ADS and CST show very good agreement in the performance of the filter.
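The reported fractional bandwidth follows directly from the band edges and the center frequency. The band-edge values below are hypothetical numbers chosen only to be consistent with the stated 13.27% at 570 MHz, not figures from the paper.

```python
# Fractional bandwidth of a bandpass filter from its band edges.
def fractional_bandwidth(f_low, f_high):
    f0 = (f_low + f_high) / 2          # arithmetic center frequency
    return (f_high - f_low) / f0

# Hypothetical band edges consistent with a 570 MHz center and ~13.27% FBW.
f1, f2 = 532.18e6, 607.82e6
print(round(100 * fractional_bandwidth(f1, f2), 2))  # -> 13.27
```

Filters with fractional bandwidths in this range are conventionally treated as narrowband, which is what makes the parallel-coupled (and folded hairpin) realization appropriate.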

  8. Xyce Parallel Electronic Simulator Users' Guide Version 6.7.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  9. Lidcombe Program Webcam Treatment for Early Stuttering: A Randomized Controlled Trial.

    PubMed

    Bridgman, Kate; Onslow, Mark; O'Brian, Susan; Jones, Mark; Block, Susan

    2016-10-01

    Webcam treatment is potentially useful for health care in cases of early stuttering in which clients are isolated from specialized treatment services for geographic and other reasons. The purpose of the present trial was to compare outcomes of clinic and webcam deliveries of the Lidcombe Program treatment (Packman et al., 2015) for early stuttering. The design was a parallel, open plan, noninferiority randomized controlled trial of the standard Lidcombe Program treatment and the experimental webcam Lidcombe Program treatment. Participants were 49 children aged 3 years 0 months to 5 years 11 months at the start of treatment. Primary outcomes were the percentage of syllables stuttered at 9 months postrandomization and the number of consultations to complete Stage 1 of the Lidcombe Program. There was insufficient evidence of a posttreatment difference of the percentage of syllables stuttered between the standard and webcam Lidcombe Program treatments. There was insufficient evidence of a difference between the groups for typical stuttering severity measured by parents or the reported clinical relationship with the treating speech-language pathologist. This trial confirmed the viability of the webcam Lidcombe Program intervention. It appears to be as efficacious and economically viable as the standard, clinic Lidcombe Program treatment.

  10. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS)-driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods agreed within the same order of magnitude and to within 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
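Under a Gaussian noise model, a BER estimate of this kind is commonly computed from the eye diagram's rail statistics via a Q-factor. The following minimal sketch uses invented eye statistics; the paper's exact estimator may differ in detail.

```python
import math

def ber_from_eye(mu0, sigma0, mu1, sigma1):
    """Estimate BER from Gaussian-distributed '0' and '1' rail levels
    (means mu0, mu1 and standard deviations sigma0, sigma1) of an eye diagram."""
    q = (mu1 - mu0) / (sigma1 + sigma0)      # Q-factor at the optimal threshold
    return 0.5 * math.erfc(q / math.sqrt(2))  # Gaussian tail probability

# Hypothetical eye statistics in arbitrary voltage units.
print(ber_from_eye(mu0=0.1, sigma0=0.03, mu1=0.9, sigma1=0.05))
```

The appeal of this approach, noted in the abstract, is that it turns a single captured eye diagram into an instantaneous BER estimate instead of requiring a long error-counting run on a BER tester.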

  11. Hemostatic Agents in Periapical Surgery: A Randomized Study of Gauze Impregnated in Epinephrine versus Aluminum Chloride.

    PubMed

    Menéndez-Nieto, Isabel; Cervera-Ballester, Juan; Maestre-Ferrín, Laura; Blaya-Tárraga, Juan Antonio; Peñarrocha-Oltra, David; Peñarrocha-Diago, Miguel

    2016-11-01

    Adequate bleeding control is essential for the success of periapical surgery. The aim of this study was to evaluate the effects of 2 hemostatic agents on the outcome of periapical surgery and their relationship with patient and tooth parameters. A prospective study was designed with 2 randomized parallel groups, depending on the hemostatic agent used: gauze impregnated in epinephrine (epinephrine group) and aluminum chloride (aluminum chloride group). Hemorrhage control was judged before and after the application of the hemostatic agents by the surgeon, and 2 examiners independently recorded it as adequate (complete hemorrhage control) or inadequate (incomplete hemorrhage control). Ninety-nine patients with a periradicular lesion were enrolled in this study and divided into 2 groups: gauze impregnated in epinephrine in 48 patients (epinephrine group) or aluminum chloride in 51 (aluminum chloride group). In the epinephrine group adequate hemostasis was achieved in 25 cases, and in the aluminum chloride group it was achieved in 37 cases (P < .05). The outcome was better in the aluminum chloride group than in the epinephrine group. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
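
The reported P < .05 for the two adequate-hemostasis proportions (25/48 vs. 37/51) can be reproduced with a standard Pearson chi-square test on the 2 x 2 table. This is a sketch using the abstract's counts, not the authors' analysis code, and it omits the continuity correction.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 degree of freedom) for the
    2x2 table [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Adequate / inadequate hemostasis counts from the abstract:
# epinephrine 25 of 48, aluminum chloride 37 of 51.
chi2 = chi_square_2x2(25, 48 - 25, 37, 51 - 37)
# chi2 exceeds 3.841, the 5% critical value for 1 df,
# consistent with the reported P < .05.
```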

  12. Xyce Parallel Electronic Simulator Users' Guide Version 6.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  13. Why caution is recommended with post-hoc individual patient matching for estimation of treatment effect in parallel-group randomized controlled trials: the case of acute stroke trials.

    PubMed

    Jafari, Nahid; Hearne, John; Churilov, Leonid

    2013-11-10

    A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focusing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we use exact optimization and simulation techniques to investigate the complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and the unmatched subjects are qualitatively different from the matched ones. It is concluded that post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented polydisperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of heterogeneous particle mixtures.

  15. Does Oral Implant Design Affect Marginal Bone Loss? Results of a Parallel-Group Randomized Controlled Equivalence Trial

    PubMed Central

    Bateli, Maria; Ben Rahal, Ghada; Christmann, Marin; Vach, Kirstin; Kohal, Ralf-Joachim

    2018-01-01

    Objective To test whether or not the modified design of the test implant (intended to increase primary stability) has an equivalent effect on marginal bone loss (MBL) compared to the control. Methods Forty patients were randomly assigned to receive test or control implants installed in identically dimensioned bony beds. Implants were radiographically monitored at installation, at prosthetic delivery, and after one year. Treatments were considered equivalent if the 90% confidence interval (CI) for the mean difference (MD) in MBL lay between −0.25 and 0.25 mm. Additionally, several soft tissue parameters and patient-reported outcome measures (PROMs) were evaluated. Linear mixed models were fitted for each patient to assess time effects on response variables. Results Thirty-three patients (21 males, 12 females; 58.2 ± 15.2 years old) with 81 implants (47 test, 34 control) were available for analysis after a mean observation period of 13.9 ± 4.5 months (3 dropouts, 3 missed appointments, and 1 missing file). The adjusted MD in MBL after one year was −0.13 mm (90% CI: −0.46 to 0.19; test group: −0.49; control group: −0.36; p = 0.507). Conclusion Both implant systems can be considered successful after one year of observation. Concerning MBL in the presented setup, equivalence of the treatments cannot be concluded. Registration This trial is registered with the German Clinical Trials Register (ID: DRKS00007877). PMID:29610765
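
The trial's equivalence criterion, that the whole 90% CI must lie inside the ±0.25 mm margin, is easy to state in code. `equivalent` is an illustrative helper, with the bounds taken from the abstract; it is not the trial's analysis code.

```python
def equivalent(ci_low, ci_high, margin=0.25):
    """Equivalence is declared only when the entire confidence
    interval for the mean difference lies inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# Abstract's result: adjusted MD -0.13 mm, 90% CI -0.46 to 0.19.
# The lower bound falls outside -0.25 mm, so equivalence is not shown
# even though the point estimate itself sits inside the margin.
conclusion = equivalent(-0.46, 0.19)
```

This is why the abstract can report both a non-significant difference (p = 0.507) and a failure to conclude equivalence: the two questions are distinct, and the CI-within-margin test is the stricter one.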

  16. Design innovations and baseline findings in a long-term Parkinson’s trial: NET-PD LS-1

    PubMed Central

    2012-01-01

    Background Based on pre-clinical data and the results of a phase 2 futility study, creatine was selected for an efficacy trial in Parkinson’s disease (PD). We present the design rationale and a description of the study cohort at baseline. Methods A randomized, multicenter, double-blind, parallel-group, placebo-controlled Phase 3 study of creatine (10 gm daily) in participants with early, treated PD, the Long-term Study – 1 (LS-1), is being conducted by the NINDS Exploratory Trials in Parkinson’s Disease (NET-PD) network. The study utilizes a global statistical test (GST) encompassing multiple clinical rating scales to provide a multidimensional assessment of disease progression. Results A total of 1,741 PD participants from 45 sites in the U.S. and Canada were randomized 1:1 to either 10 gm creatine/day or matching placebo. Participants are being evaluated for a minimum of 5 years. The LS-1 baseline cohort includes participants treated with dopaminergic therapy and with generally mild PD. Conclusions LS-1 represents the largest cohort of patients with early treated PD ever enrolled in a clinical trial. The GST approach should provide high power to test the hypothesis that daily administration of creatine (10 gm/day) is more effective than placebo in slowing clinical decline in PD between baseline and the 5-year follow-up visit against the background of dopaminergic therapy and best PD care. PMID:23079770

  17. Protection of xenon against postoperative oxygen impairment in adults undergoing Stanford Type-A acute aortic dissection surgery

    PubMed Central

    Jin, Mu; Cheng, Yi; Yang, Yanwei; Pan, Xudong; Lu, Jiakai; Cheng, Weiping

    2017-01-01

    Abstract Objectives: The available evidence shows that hypoxemia after Stanford Type-A acute aortic dissection (AAD) surgery is a frequent cause of several adverse consequences. The pathogenesis of postoperative hypoxemia after AAD surgery is complex, and ischemia/reperfusion and inflammation are likely to be underlying risk factors. Xenon, recognized as an ideal anesthetic and anti-inflammatory treatment, might be a possible treatment for these adverse effects. Methods/Design: The trial is a prospective, double-blind, 4-group, parallel, randomized controlled, single-center clinical trial. We will recruit 160 adult patients undergoing Stanford Type-A AAD surgery. Patients will be allocated a study number and will be randomized on a 1:1:1:1 basis to receive 1 of the 3 treatment options (pulmonary inflation with 50% xenon, 75% xenon, or 100% xenon) or no treatment (control group, pulmonary inflation with 50% nitrogen). The aims of this study are to clarify the lung protection capability of xenon and its possible mechanisms in patients undergoing Stanford Type-A AAD surgery. Discussion: This trial uses an innovative design to assess the effects of xenon on postoperative oxygen impairment and to delineate the mechanism for any benefit from xenon. The investigational xenon arm is considered a treatment intervention, as it includes 3 groups of pulmonary static inflation with 50%, 75%, and 100% xenon. It is suggested that future trials might define an appropriate concentration of xenon for best-practice intervention. PMID:28834897

  18. A novel visual hardware behavioral language

    NASA Technical Reports Server (NTRS)

    Li, Xueqin; Cheng, H. D.

    1992-01-01

    Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language can graphically express design information using visual parallel models (blocks), visual sequential models (processes), and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify concurrent and sequential hardware functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) to the hardware designers. This allows hardware designers to capture design ideas easily and explicitly.

  19. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
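
The saturation observed for branch swapping beyond 16 slave processors is the classic Amdahl's-law pattern: once the serial fraction of the work dominates, adding processors buys little. The sketch below is not POY's code, and the 95% parallel fraction is an assumed value chosen only to illustrate the shape of the curve.

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: attainable speedup when only parallel_fraction
    of the work can be spread across the given processor count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

def efficiency(parallel_fraction, processors):
    """Parallel efficiency: speedup divided by processor count."""
    return amdahl_speedup(parallel_fraction, processors) / processors

# With an assumed 95% parallelizable workload, 16 processors already
# reach nearly half the asymptotic limit of 1/0.05 = 20, so adding
# many more processors mostly destroys efficiency, not runtime.
s16 = amdahl_speedup(0.95, 16)
s256 = amdahl_speedup(0.95, 256)
```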

  20. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool for structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs. PMID:22357132

  1. Programming Probabilistic Structural Analysis for Parallel Processing Computer

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.

    1991-01-01

    The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient, and hence practical for the design environment, by achieving large-scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
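
The deterministic core underlying a preconditioned conjugate gradient method can be sketched as follows. This is a plain Jacobi-preconditioned CG for symmetric positive-definite systems, offered only as orientation; it is not the special-purpose stochastic variant the paper presents, which exploits the structure of perturbed stochastic systems.

```python
def matvec(A, x):
    """Dense matrix-vector product using nested lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def jacobi_pcg(A, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient with a Jacobi (diagonal)
    preconditioner, for symmetric positive-definite A."""
    n = len(b)
    m_inv = [1.0 / A[i][i] for i in range(n)]  # inverse of diag(A)
    x = [0.0] * n
    r = b[:]                                   # residual for x = 0
    z = [mi * ri for mi, ri in zip(m_inv, r)]  # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz, rz_old = dot(r, z), rz
        p = [zi + (rz / rz_old) * pi for zi, pi in zip(z, p)]
    return x

# Small SPD example: solves 4x + y = 1, x + 3y = 2.
x = jacobi_pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```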

  2. Design and analysis of all-dielectric broadband nonpolarizing parallel-plate beam splitters.

    PubMed

    Wang, Wenliang; Xiong, Shengming; Zhang, Yundong

    2007-06-01

    Past research on all-dielectric nonpolarizing beam splitters is reviewed. With the aid of the needle thin-film synthesis method and the conjugate gradient refinement method, three nonpolarizing parallel-plate beam splitters with different split ratios are designed over a 200 nm spectral range centered at 550 nm with an incidence angle of 45 degrees. The choice of materials and the initial stack are based on the theories of Costich and Thelen. The results of design and analysis show that the designs maintain a very low polarization ratio in the working range of the spectrum and have a reasonable angular field.
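
The polarization problem such a design must compensate for comes straight from the Fresnel equations: at 45 degrees incidence, an uncoated dielectric interface reflects s- and p-polarized light very differently. The sketch below uses a single uncoated interface with an illustrative glass-like index of 1.52, not the multilayer designs of the paper.

```python
import math

def fresnel_reflectances(n1, n2, theta_i_deg):
    """Intensity reflectances (Rs, Rp) of a single n1->n2 interface
    from the Fresnel equations, at the given angle of incidence."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 / n2 * math.sin(ti))  # Snell's law
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / \
         (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / \
         (n2 * math.cos(ti) + n1 * math.cos(tt))
    return rs * rs, rp * rp

# At 45 degrees an air-glass interface reflects s light roughly ten
# times more strongly than p light; equalizing the two over a broad
# band is exactly what the multilayer synthesis has to achieve.
rs_val, rp_val = fresnel_reflectances(1.0, 1.52, 45.0)
```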

  3. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.

  4. A Next-Generation Parallel File System Environment for the OLCF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul

    2012-01-01

    When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid-2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.

  5. Double jeopardy in inferring cognitive processes

    PubMed Central

    Fific, Mario

    2014-01-01

    Inferences we make about underlying cognitive processes can be jeopardized in two ways due to problematic forms of aggregation. First, averaging across individuals is typically considered a very useful tool for removing random variability. The threat is that averaging across subjects leads to averaging across different cognitive strategies, thus harming our inferences. The second threat comes from the construction of inadequate research designs possessing low diagnostic accuracy for cognitive processes. For that reason we introduced systems factorial technology (SFT), which has primarily been designed to make inferences about underlying processing order (serial, parallel, coactive), stopping rule (terminating, exhaustive), and process dependency. SFT proposes that the minimal research design complexity to learn about n cognitive processes should be equal to 2^n. In addition, SFT proposes that (a) each cognitive process should be controlled by a separate experimental factor, and (b) the saliency levels of all factors should be combined in a full factorial design. In the current study, the author cross-combined the levels of the jeopardies in a 2 × 2 analysis, leading to four different analysis conditions. The results indicate a decline in the diagnostic accuracy of inferences made about cognitive processes due to the presence of each jeopardy in isolation and when combined. The results warrant the development of more individual-subject analyses and the utilization of full-factorial (SFT) experimental designs. PMID:25374545
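
SFT's full-factorial requirement (one experimental factor per cognitive process, saliency levels fully crossed, so 2**n conditions for n processes at two levels) is easy to generate programmatically. A minimal sketch with illustrative names:

```python
from itertools import product

def full_factorial(n_processes, levels=("low", "high")):
    """All saliency-level combinations for n experimental factors,
    one factor per cognitive process: len(levels)**n conditions."""
    return list(product(levels, repeat=n_processes))

# Two processes at two saliency levels give the 2 x 2 = 4 conditions
# used in the study; three processes would require 2**3 = 8.
conditions = full_factorial(2)
```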

  6. Effectiveness of a Proactive Mail-Based Alcohol Internet Intervention for University Students: Dismantling the Assessment and Feedback Components in a Randomized Controlled Trial

    PubMed Central

    McCambridge, Jim; Bendtsen, Marcus; Karlsson, Nadine; Nilsen, Per

    2012-01-01

    Background University students in Sweden routinely receive proactive mail-based alcohol Internet interventions sent from student health services. This intervention provides personalized normative feedback on alcohol consumption with suggestions on how to decrease drinking. Earlier feasibility trials by our group and others have examined effectiveness in simple parallel-group designs. Objective To evaluate the effectiveness of electronic screening and brief intervention, using a randomized controlled trial design that takes account of baseline assessment reactivity (and other possible effects of the research process) due to the similarity between the intervention and assessment content. The design of the study allowed for exploration of the magnitude of the assessment effects per se. Methods This trial used a dismantling design and randomly assigned 5227 students to 3 groups: (1) routine practice assessment and feedback, (2) assessment-only without feedback, and (3) neither assessment nor feedback. At baseline all participants were blinded to study participation, with no contact being made with group 3. We approached students 2 months later to participate in a cross-sectional alcohol survey. All interventions were fully automated and did not have any human involvement. All data used in the analysis were based on self-assessment using questionnaires. The participants were unaware that they were participating in a trial and thus were also blinded to which group they were randomly assigned. Results Overall, 44.69% (n = 2336) of those targeted for study completed follow-up. Attrition was similar in groups 1 (697/1742, 40.01% retained) and 2 (737/1742, 42.31% retained) and lower in group 3 (902/1743, 51.75% retained). Intention-to-treat analyses among all participants regardless of their baseline drinking status revealed no differences between groups in all alcohol parameters at the 2-month follow-up.
Per-protocol analyses of groups 1 and 2 among those who accepted the email intervention (36.2% of the students who were offered the intervention in group 1 and 37.3% of the students in group 2) and who were risky drinkers at baseline (60.7% follow-up rate in group 1 and 63.5% in group 2) suggested possible small beneficial effects on weekly consumption attributable to feedback. Conclusions This approach to outcome evaluation is highly conservative, and small benefits may follow the actual uptake of the feedback intervention in students who are risky drinkers, the precise target group. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 24735383; http://www.controlled-trials.com/ISRCTN24735383 (Archived by WebCite at http://www.webcitation.org/6Awq7gjXG) PMID:23113955

  7. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems and resolving solution features of interest. Unfortunately, it causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solution but requires only 1 percent of the computational time.
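
One greedy flavor of such a heuristic remapping, offered as a sketch rather than the paper's actual algorithm, is to repeatedly assign the partition-processor pair with the most data already in place, so that redistribution cost stays small. It assumes equal numbers of partitions and processors and a precomputed overlap matrix.

```python
def greedy_remap(overlap):
    """Greedy one-to-one assignment of partitions to processors.
    overlap[i][j] = amount of partition i's data already resident
    on processor j; larger overlap means cheaper redistribution."""
    n = len(overlap)
    # Consider candidate pairs from the largest overlap downward.
    pairs = sorted(((overlap[i][j], i, j)
                    for i in range(n) for j in range(n)), reverse=True)
    taken_part, taken_proc, mapping = set(), set(), {}
    for _, i, j in pairs:
        if i not in taken_part and j not in taken_proc:
            mapping[i] = j
            taken_part.add(i)
            taken_proc.add(j)
    return mapping

# Partition 1's data mostly lives on processor 1, so it stays there.
m = greedy_remap([[5, 1], [2, 7]])
```

Greedy matching is not guaranteed optimal (optimal assignment would need, e.g., the Hungarian algorithm), which mirrors the paper's trade of a few percent of solution quality for a large reduction in computational time.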

  8. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (DMA) data transfers in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  9. Compact hohlraum configuration with parallel planar-wire-array x-ray sources at the 1.7-MA Zebra generator.

    PubMed

    Kantsyrev, V L; Chuvatin, A S; Rudakov, L I; Velikovich, A L; Shrestha, I K; Esaulov, A A; Safronova, A S; Shlyaptseva, V V; Osborne, G C; Astanovitsky, A L; Weller, M E; Stafford, A; Schultz, K A; Cooper, M C; Cuneo, M E; Jones, B; Vesey, R A

    2014-12-01

    A compact Z-pinch x-ray hohlraum design with parallel-driven x-ray sources is experimentally demonstrated in a configuration with a central target and tailored shine shields at a 1.7-MA Zebra generator. Driving in parallel two magnetically decoupled compact double-planar-wire Z pinches has demonstrated the generation of synchronized x-ray bursts that correlated well in time with x-ray emission from a central reemission target. Good agreement between simulated and measured hohlraum radiation temperature of the central target is shown. The advantages of compact hohlraum design applications for multi-MA facilities are discussed.

  10. [Parodontocid efficiency in complex treatment and prevention of gingivitis].

    PubMed

    Makeeva, I M; Turkina, A Iu; Poliakova, M A; Babina, K S

    2013-01-01

    The antiplaque/antigingivitis effect of an alcohol-free mouthrinse, Parodontocid, was evaluated in a randomized parallel-group clinical trial. Sixty patients with gingivitis were clinically examined to determine PHP, RMNPI, and PMA indexes. After professional dental prophylaxis, subjects were randomly assigned to two groups for a 10-day oral hygiene program. Group 1 patients used only a toothbrush and prophylactic toothpaste, while group 2 patients used Parodontocid in conjunction with normal brushing and flossing. Parodontocid significantly reduced plaque and gingivitis compared to the negative control.

  11. Systematic Review of Integrative Health Care Research: Randomized Control Trials, Clinical Controlled Trials, and Meta-Analysis

    DTIC Science & Technology

    2010-01-01

    …compared to usual care (control). Also, in the pilot study of the 4 individual Noetic therapies, off-site prayer was associated with the lowest absolute mortality in-hospital and at 6 months [16]. The parallel randomization to 4 different Noetic therapies across 5 study arms limited the assessment of each individual therapy. Cited reference: "…interventional cardiac care: the Monitoring and Actualisation of Noetic Trainings (MANTRA) II randomised study," Lancet, vol. 366, no. 9481, pp. 211–217, 2005. [18]

  12. Parallel/distributed direct method for solving linear systems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) for large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) the schemes are especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) they can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) this set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.

  13. Critical appraisal of clinical trials in multiple system atrophy: Toward better quality.

    PubMed

    Castro Caldas, Ana; Levin, Johannes; Djaldetti, Ruth; Rascol, Olivier; Wenning, Gregor; Ferreira, Joaquim J

    2017-10-01

    Multiple system atrophy (MSA) is a rare neurodegenerative disease of undetermined cause. Although many clinical trials have been conducted, there is still no treatment that cures the disease or slows its progression. We sought to assess the methodology and quality of reporting of clinical trials conducted in MSA patients. We conducted a systematic review of all trials with at least 1 MSA patient subject to any pharmacological/nonpharmacological intervention. Two independent reviewers evaluated the methodological characteristics and quality of reporting of the trials. A total of 60 clinical trials were identified, including 1375 MSA patients. Of the trials, 51% (n = 31) were single-arm studies. A total of 28% (n = 17) had a parallel design, 13 of which were placebo controlled. Of the studies, 8 (13.3%) were conducted in a multicenter setting, 3 of which were responsible for 49.3% (n = 678) of the total included MSA patients. The description of primary outcomes was unclear in 60% (n = 40) of trials. Only 10 (16.7%) clinical trials clearly described the randomization process. Blinding of the participants, personnel, and outcome assessments was at high risk of bias in the majority of studies. The number of dropouts/withdrawals was high (n = 326, 23.4% of the included patients). Overall, the design and quality of reporting of the reviewed studies are unsatisfactory. Most of the clinical trials were small and single centered. Inadequate reporting was related to information on the randomization process, sequence generation, allocation concealment, blinding of participants, and sample size calculations. Although improved in recent years, methodological quality and trial design need to be optimized to generate more informative results. © 2017 International Parkinson and Movement Disorder Society.

  14. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems are presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower level components. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high level design language, an object oriented development kit for tool design, a design data management system, and design and analysis tools, such as a high level simulator and a graphics design interface, all integrated into the prototype system, are described. Procedures are described for the synthesis of semiregular processor arrays and for computing the switching of input/output signals, memory management and control of the processor array, and the sequencing and segmentation of input/output data streams that result from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed and each component mapped to a module or module generator in a symbolic layout library, compacted for the design rules of a VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.

  15. NAS Requirements Checklist for Job Queuing/Scheduling Software

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.

  16. TECA: A Parallel Toolkit for Extreme Climate Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, Mr; Ruebel, Oliver; Byna, Surendra

    2012-03-12

    We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.
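    The task-level parallelism TECA exploits can be illustrated with a minimal Python sketch (not TECA's actual implementation): independent (ensemble, timestep) slabs are mapped onto a worker pool, with a stand-in detector in place of real cyclone or atmospheric-river tracking.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def detect_events(task):
    """Stand-in for an extreme-event detector applied to one
    (ensemble, timestep) slab; returns a fake event count rather
    than a real detection result."""
    ensemble, timestep = task
    return (ensemble, timestep, (ensemble + timestep) % 3)

def run_parallel(n_ensembles, n_timesteps, workers=4):
    """Exploit parallelism across ensemble members and timesteps by
    mapping independent detection tasks onto a worker pool."""
    tasks = list(product(range(n_ensembles), range(n_timesteps)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_events, tasks))

results = run_parallel(2, 3)  # 6 independent tasks; map preserves task order
```

    Because the tasks share no state, the same decomposition extends to spatial tiles, and the pool can be replaced by MPI ranks for cluster-scale runs.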

  17. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
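    ADIFOR generates derivative-augmented Fortran; the underlying forward-mode idea can be shown with a minimal dual-number sketch in Python (an illustration of the principle, not the ADIFOR tool itself).

```python
class Dual:
    """Minimal forward-mode AD value: (value, derivative w.r.t. one design variable)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule propagated alongside the value.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def lift(f, x):
    """Evaluate f and df/dx at x by seeding the derivative slot with 1."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# d/dx of x*x + 3x at x = 2 is 2*2 + 3 = 7
val, dot = lift(lambda x: x * x + 3 * x, 2.0)
```

    Seeding one design variable at a time yields one column of the sensitivity derivative matrix per evaluation; tools like ADIFOR apply the same propagation rules mechanically to an entire CFD code.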

  18. An All-Freeze-Casting Strategy to Design Typographical Supercapacitors with Integrated Architectures.

    PubMed

    Wang, Qingrong; Wang, Xinyu; Wan, Fang; Chen, Kena; Niu, Zhiqiang; Chen, Jun

    2018-06-01

    The emergence of flexible and wearable electronics has raised the demand for flexible supercapacitors with accurate sizes and aesthetic shapes. Here, a strategy is developed to prepare flexible all-in-one integrated supercapacitors by combining all-freeze-casting with typography technique. The continuous seamless connection of all-in-one supercapacitor devices enhances the load and/or electron transfer capacity and avoids displacing and detaching between their neighboring components at bending status. Therefore, such a unique structure of all-in-one integrated devices is beneficial for retaining stable electrochemical performance at different bending levels. More importantly, the sizes and aesthetic shapes of integrated supercapacitors could be controlled by the designed molds, like type matrices of typography. The molds could be assembled together and typeset randomly, achieving the controllable construction and series and/or parallel connection of several supercapacitor devices. The preparation of flexible integrated supercapacitors will pave the way for assembling programmable all-in-one energy storage devices into highly flexible electronics. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Design and Analysis of a Neuromemristive Reservoir Computing Architecture for Biosignal Processing

    PubMed Central

    Kudithipudi, Dhireesha; Saleh, Qutaiba; Merkel, Cory; Thesing, James; Wysocki, Bryant

    2016-01-01

    Reservoir computing (RC) is gaining traction in several signal processing domains, owing to its non-linear stateful computation, spatiotemporal encoding, and reduced training complexity over recurrent neural networks (RNNs). Previous studies have shown the effectiveness of software-based RCs for a wide spectrum of applications. A parallel body of work indicates that realizing RNN architectures using custom integrated circuits and reconfigurable hardware platforms yields significant improvements in power and latency. In this research, we propose a neuromemristive RC architecture, with doubly twisted toroidal structure, that is validated for biosignal processing applications. We exploit the device mismatch to implement the random weight distributions within the reservoir and propose mixed-signal subthreshold circuits for energy efficiency. A comprehensive analysis is performed to compare the efficiency of the neuromemristive RC architecture in both digital (reconfigurable) and subthreshold mixed-signal realizations. Both Electroencephalogram (EEG) and Electromyogram (EMG) biosignal benchmarks are used for validating the RC designs. The proposed RC architecture demonstrated accuracies of 90% and 84% for epileptic seizure detection and EMG prosthetic finger control, respectively. PMID:26869876

  20. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
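    The preconditioning idea can be sketched in a few lines of NumPy. This is a toy sketch of the algorithm as described in the abstract, for the overdetermined case, with `numpy.linalg.lstsq` standing in for the LSQR/Chebyshev iteration that LSRN actually uses.

```python
import numpy as np

def lsrn_sketch(A, b, gamma=2.0, seed=0):
    """Toy LSRN for overdetermined A (m >> n): build a right preconditioner
    from a random normal projection, then solve the well-conditioned system."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    s = int(np.ceil(gamma * n))          # sketch size, moderately larger than n
    G = rng.standard_normal((s, m))      # random normal projection (embarrassingly parallel)
    _, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sig                       # preconditioner: A @ N is well-conditioned
    # LSRN hands this system to LSQR or the Chebyshev method; lstsq stands in here.
    y, *_ = np.linalg.lstsq(A @ N, b, rcond=None)
    return N @ y

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                           # consistent system, so recovery is exact
x = lsrn_sketch(A, b)
```

    Because A appears only in products, the same sketch applies when A is sparse or an abstract linear operator.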

  1. Parallel particle impactor - novel size-selective particle sampler for accurate fractioning of inhalable particles

    NASA Astrophysics Data System (ADS)

    Trakumas, S.; Salter, E.

    2009-02-01

    Adverse health effects due to exposure to airborne particles are associated with particle deposition within the human respiratory tract. Particle size, shape, chemical composition, and the individual physiological characteristics of each person determine to what depth inhaled particles may penetrate and deposit within the respiratory tract. Various particle inertial classification devices are available to fractionate airborne particles according to their aerodynamic size to approximate particle penetration through the human respiratory tract. Cyclones are most often used to sample thoracic or respirable fractions of inhaled particles. Extensive studies of different cyclonic samplers have shown, however, that the sampling characteristics of cyclones do not accurately follow the selected convention over its entire range. In the search for a more accurate way to assess worker exposure to different fractions of inhaled dust, a novel sampler comprising several separated inertial impactors arranged in parallel was designed and tested. Prototypes of respirable and thoracic samplers, each comprising four impactors arranged in parallel, were manufactured and tested. Results indicated that the prototype samplers followed closely the penetration characteristics for which they were designed. The new samplers were found to perform similarly for liquid and solid test particles; penetration characteristics remained unchanged even after prolonged exposure to coal mine dust at high concentration. The new parallel impactor design can be applied to approximate any monotonically decreasing penetration curve at a selected flow rate. Personal-size samplers that operate at a few L/min as well as area samplers that operate at higher flow rates can be made based on the suggested design. Performance of such samplers can be predicted with high accuracy employing well-established impaction theory.
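    The parallel arrangement can be illustrated numerically: each ideal impactor contributes a sharp cutoff, and with equal flow splits the bank's overall penetration at a given size is the fraction of impactors whose cut size exceeds it, forming a staircase that approximates the target curve. The cut sizes below are hypothetical, chosen only for illustration.

```python
def parallel_penetration(d, cut_sizes):
    """Overall penetration of a bank of ideal impactors in parallel, each
    drawing an equal share of the flow: a particle of aerodynamic diameter
    d penetrates only those impactors whose cut size exceeds d."""
    return sum(1 for c in cut_sizes if d < c) / len(cut_sizes)

# Four hypothetical cut sizes (micrometres); more impactors in parallel
# give a finer staircase approximation to the target penetration curve.
cuts = [10.0, 7.0, 5.0, 3.5]
profile = [parallel_penetration(d, cuts) for d in (1, 4, 6, 8, 12)]
```

    Any monotonically decreasing penetration convention can be approximated this way by choosing the cut sizes (and, more generally, unequal flow splits) to match the target curve.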

  2. Electromagnetic Design of a Magnetically-Coupled Spatial Power Combiner

    NASA Technical Reports Server (NTRS)

    Bulcha, B.; Cataldo, G.; Stevenson, T. R.; U-Yen, K.; Moseley, S. H.; Wollack, E. J.

    2017-01-01

    The design of a two-dimensional beam-combining network employing a parallel-plate superconducting waveguide with a mono-crystalline silicon dielectric is presented. This novel beam-combining network structure employs an array of magnetically coupled antenna elements to achieve high coupling efficiency and full sampling of the intensity distribution while avoiding diffractive losses in the multi-mode region defined by the parallel-plate waveguide. These attributes enable the structure's use in realizing compact far-infrared spectrometers for astrophysical and instrumentation applications. When configured with a suitable corporate-feed power-combiner, this fully sampled array can be used to realize a low-sidelobe apodized response without incurring a reduction in coupling efficiency. To control undesired reflections over a wide range of angles in the finite-sized parallel-plate waveguide region, a wideband meta-material electromagnetic absorber structure is implemented. This adiabatic structure absorbs greater than 99% of the power over the 1.7:1 operational band at angles ranging from normal (0 degree) to near parallel (180 degree) incidence. Design, simulations, and application of the device will be presented.

  3. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and on real data from Vinton Dome yield improved results and show that the improved inversion algorithm is effective and feasible. The parallel algorithm we designed outperforms other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
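    The scalability metrics used in such an analysis have standard definitions: speedup S_n = T_1 / T_n and parallel efficiency E_n = S_n / n. A minimal sketch with hypothetical timings:

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_devices):
    """Parallel efficiency: speedup per device; 1.0 is ideal scaling."""
    return speedup(t_serial, t_parallel) / n_devices

# Hypothetical timings (seconds) for a fixed problem size.
t1 = 800.0
s2 = speedup(t1, 420.0)        # speedup on 2 GPUs
e4 = efficiency(t1, 230.0, 4)  # efficiency on 4 GPUs
```

    Plotting E_n against the number of GPUs shows how far the program is from ideal scaling as devices are added.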

  4. Predicting the stability of a compressible periodic parallel jet flow

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey H.

    1996-01-01

    It is known that mixing enhancement in compressible free shear layer flows with high convective Mach numbers is difficult. One design strategy to get around this is to use multiple nozzles. Extrapolating this design concept in a one dimensional manner, one arrives at an array of parallel rectangular nozzles where the smaller dimension is ω and the longer dimension, b, is taken to be infinite. In this paper, the feasibility of predicting the stability of this type of compressible periodic parallel jet flow is discussed. The problem is treated using Floquet-Bloch theory. Numerical solutions to this eigenvalue problem are presented. For the case presented, the interjet spacing, s, was selected so that s/ω = 2.23. Typical plots of the eigenvalue and stability curves are presented. Results obtained for a range of convective Mach numbers from 3 to 5 show growth rates ω_i = kc_i/2 ranging from 0.25 to 0.29. These results indicate that coherent two-dimensional structures can occur without difficulty in multiple parallel periodic jet nozzles and that shear layer mixing should occur with this type of nozzle design.

  5. Semi-Individualized Homeopathy Add-On Versus Usual Care Only for Premenstrual Disorders: A Randomized, Controlled Feasibility Study.

    PubMed

    Klein-Laansma, Christien T; Jong, Mats; von Hagens, Cornelia; Jansen, Jean Pierre C H; van Wietmarschen, Herman; Jong, Miek C

    2018-03-22

    Premenstrual syndrome and premenstrual dysphoric disorder (PMS/PMDD) bother a substantial number of women. Homeopathy seems a promising treatment, but it needs investigation using reliable study designs. The feasibility of organizing an international randomized pragmatic trial on a homeopathic add-on treatment (usual care [UC] + HT) compared with UC alone was evaluated. A multicenter, randomized, controlled pragmatic trial with parallel groups. The study was organized in general and private homeopathic practices in the Netherlands and Sweden and in an outpatient university clinic in Germany. Women diagnosed as having PMS/PMDD, based on prospective daily rating by the daily record of severity of problems (DRSP) during a period of 2 months, were included and randomized. Women were to receive UC + HT or UC for 4 months. Homeopathic medicine selection was according to a previously tested prognostic questionnaire and electronic algorithm. Usual care was as provided by the women's general practitioner according to their preferences. Before and after treatment, the women completed diaries (DRSP), the Measure Yourself Concerns and Wellbeing questionnaire, and other questionnaires. Intention-to-treat (ITT) and per protocol (PP) analyses were performed. In Germany, the study could not proceed because of legal limitations. In Sweden, recruitment proved extremely difficult. In the Netherlands and Sweden, 60 women were randomized (UC + HT: 28; UC: 32), and data of 47/46 women were analyzed (ITT/PP). After 4 months, the relative mean change of DRSP scores in the UC + HT group was significantly better than in the UC group (p = 0.03). With respect to recruitment and different legal status, it does not seem feasible to perform a larger, international, pragmatic randomized trial on (semi-)individualized homeopathy for PMS/PMDD. Since the added value of HT compared with UC was demonstrated by significant differences in symptom score changes, further studies are warranted.

  6. A Mixed-Methods, Randomized, Controlled Feasibility Trial to Inform the Design of a Phase III Trial to Test the Effect of the Handheld Fan on Physical Activity and Carer Anxiety in Patients With Refractory Breathlessness.

    PubMed

    Johnson, Miriam J; Booth, Sara; Currow, David C; Lam, Lawrence T; Phillips, Jane L

    2016-05-01

    The handheld fan is an inexpensive and safe way to provide facial airflow, which may reduce the sensation of chronic refractory breathlessness, a frequently encountered symptom. To test the feasibility of developing an adequately powered, multicenter, multinational randomized controlled trial comparing the efficacy of a handheld fan and exercise advice with advice alone in increasing activity in people with chronic refractory breathlessness from a variety of medical conditions, measuring recruitment rates; data quality; and potential primary outcome measures. This was a Phase II, multisite, international, parallel, nonblinded, mixed-methods randomized controlled trial. Participants were centrally randomized to fan or control. All received breathlessness self-management/exercise advice and were followed up weekly for four weeks. Participants/carers were invited to participate in a semistructured interview at the study's conclusion. Ninety-seven people were screened, 49 randomized (mean age 68 years; 49% men), and 43 completed the study. Site recruitment varied from 0.25 to 3.3/month and screening:randomization from 1.1:1 to 8.5:1. There were few missing data except for the Chronic Obstructive Pulmonary Disease Self-Efficacy Scale (two-thirds of data missing). No harms were observed. Three interview themes included 1) a fan is a helpful self-management strategy, 2) a fan aids recovery, and 3) a symptom control trial was welcome. A definitive, multisite trial to study the use of the handheld fan as part of self-management of chronic refractory breathlessness is feasible. Participants found the fan useful. However, the value of information for changing practice or policy is unlikely to justify the expense of such a trial, given perceived benefits, the minimal costs, and an absence of harms demonstrated in this study. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  7. Calcium/vitamin D supplementation, serum 25-hydroxyvitamin D concentrations, and cholesterol profiles in the Women's Health Initiative calcium/vitamin D randomized trial.

    PubMed

    Schnatz, Peter F; Jiang, Xuezhi; Vila-Wright, Sharon; Aragaki, Aaron K; Nudy, Matthew; O'Sullivan, David M; Jackson, Rebecca; LeBlanc, Erin; Robinson, Jennifer G; Shikany, James M; Womack, Catherine R; Martin, Lisa W; Neuhouser, Marian L; Vitolins, Mara Z; Song, Yiqing; Kritchevsky, Stephen; Manson, JoAnn E

    2014-08-01

    The objective of this study was to evaluate whether increased serum 25-hydroxyvitamin D3 (25OHD3) concentrations, in response to calcium/vitamin D (CaD) supplementation, are associated with improved lipids in postmenopausal women. The parent trial was a double-blind, randomized, placebo-controlled, parallel-group trial designed to test the effects of CaD supplementation (1,000 mg of elemental calcium + 400 IU of vitamin D3 daily) versus placebo in postmenopausal women. Women from the general community, including multiple sites in the United States, were enrolled between 1993 and 1998. This cohort included 300 white, 200 African-American, and 100 Hispanic participants who were randomly selected from the Women's Health Initiative CaD trial. Serum 25OHD3 and lipid (fasting plasma triglycerides [TG], high-density lipoprotein cholesterol [HDL-C], and calculated low-density lipoprotein cholesterol [LDL-C]) levels were assessed before and after CaD randomization. There was a 38% increase in mean serum 25OHD3 concentrations after 2 years (95% CI, 1.29-1.47, P < 0.001) for women randomized to CaD (24.3 ng/mL postrandomization mean) compared with placebo (18.2 ng/mL). Women randomized to CaD had a 4.46-mg/dL mean decrease in LDL-C (P = 0.03). Higher concentrations of 25OHD3 were associated with higher HDL-C levels (P = 0.003), along with lower LDL-C and TG levels (P = 0.02 and P < 0.001, respectively). Supplemental CaD significantly increases 25OHD3 concentrations and decreases LDL-C. Women with higher 25OHD3 concentrations have more favorable lipid profiles, including increased HDL-C, lower LDL-C, and lower TG. These results support the hypothesis that higher concentrations of 25OHD3, in response to CaD supplementation, are associated with improved LDL-C.

  8. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism in Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
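    HPF's data-distribution idea can be made concrete with its BLOCK mapping, which assigns contiguous chunks of an array to processors. A minimal Python sketch of that ownership rule (an illustration of the mapping, not an HPF compiler):

```python
def block_owner(i, n, p):
    """Owner of element i (0-based) of an n-element array BLOCK-distributed
    over p processors: contiguous chunks of ceil(n/p) elements, in the
    spirit of HPF's DISTRIBUTE (BLOCK) directive."""
    block = -(-n // p)            # ceiling division
    return i // block

def local_elements(rank, n, p):
    """Indices a given processor owns under the same mapping."""
    block = -(-n // p)
    return list(range(rank * block, min((rank + 1) * block, n)))

# 10 elements over 4 processors: chunks of 3, 3, 3 and 1.
owners = [block_owner(i, 10, 4) for i in range(10)]
```

    Given such a mapping, an HPF compiler can place each loop iteration on the processor that owns the data it touches, which is exactly the help the user's DISTRIBUTE/ALIGN directives provide.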

  9. OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems.

    PubMed

    Stone, John E; Gohara, David; Shi, Guochun

    2010-05-01

    We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures.

  10. Efficacy of an adjunctive brief psychodynamic psychotherapy to usual inpatient treatment of depression: rationale and design of a randomized controlled trial

    PubMed Central

    2012-01-01

    Background A few recent studies have found indications of the effectiveness of inpatient psychotherapy for depression, usually of an extended duration. However, there is a lack of controlled studies in this area and to date no study of adequate quality on brief psychodynamic psychotherapy for depression during short inpatient stay exists. The present article describes the protocol of a study that will examine the relative efficacy, the cost-effectiveness and the cost-utility of adding an Inpatient Brief Psychodynamic Psychotherapy to pharmacotherapy and treatment-as-usual for inpatients with unipolar depression. Methods/Design The study is a one-month randomized controlled trial with a two-parallel-group design and a 12-month naturalistic follow-up. A sample of 130 consecutive adult inpatients with unipolar depression and Montgomery-Asberg Depression Rating Scale score over 18 will be recruited. The study is carried out in the university hospital section for mood disorders in Lausanne, Switzerland. Patients are assessed upon admission, and at 1-, 3- and 12-month follow-ups. Inpatient therapy is a manualized brief intervention, combining the virtues of inpatient setting and of time-limited dynamic therapies (focal orientation, fixed duration, resource-oriented interventions). Treatment-as-usual represents the best level of practice for a minimal treatment condition usually proposed to inpatients. Final analyses will follow an intention-to-treat strategy. Depressive symptomatology is the primary outcome, and secondary outcomes include measures of psychiatric symptomatology, psychosocial role functioning, and psychodynamic-emotional functioning. The mediating role of the therapeutic alliance is also examined. Allocation to treatment groups uses a stratified block randomization method with permuted blocks. To guarantee allocation concealment, randomization is done by an independent researcher.
Discussion Despite the large number of studies on treatment of depression, there is a clear lack of controlled research in inpatient psychotherapy during the acute phase of a major depressive episode. Research on brief therapy is important to take into account current short lengths of stay in psychiatry. The current study has the potential to scientifically inform appropriate inpatient treatment. This study is the first to address the issue of the economic evaluation of inpatient psychotherapy. Trial registration Australian New Zealand Clinical Trial Registry (ACTRN12612000909820) PMID:23110608
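    The allocation scheme described in the protocol, stratified block randomization with permuted blocks, can be sketched for one stratum as follows. The block size and arm labels here are illustrative assumptions, not taken from the protocol.

```python
import random

def permuted_block_schedule(n, block_size=4, arms=("IBPP+TAU", "TAU"), seed=42):
    """Allocation list for one stratum: within each block, every arm appears
    equally often in random order, so the arms' group sizes can never drift
    apart by more than half a block. Block size and labels are illustrative."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)          # independent permutation per block
        schedule.extend(block)
    return schedule[:n]

sched = permuted_block_schedule(12)
```

    In practice the sequence is generated per stratum by someone independent of recruitment, so that the next assignment stays concealed from the enrolling clinician.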

  11. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing systems is often based on a specific application, an approach that is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), giving it better performance. A three-dimensional image generation algorithm is used to validate the efficiency of the proposed model by comparison with the Master-Slave and Data Flow models.
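    The combination of pipelining and message passing can be illustrated with a thread-and-queue sketch in Python (the stages and messages are illustrative; in the paper's setting each stage would run on a separate DSP core and the queues would be inter-core message channels):

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One pipeline stage: receive a message, process it, pass it on.
    None is the shutdown sentinel, forwarded so downstream stages stop too."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            return
        outbox.put(fn(item))

# Three illustrative stages of a processing chain, wired by message queues.
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [(lambda x: x + 1, q0, q1),
          (lambda x: x * 2, q1, q2),
          (lambda x: x - 3, q2, q3)]
threads = [threading.Thread(target=stage, args=s) for s in stages]
for t in threads:
    t.start()
for msg in [1, 2, 3, None]:
    q0.put(msg)
outputs = [q3.get() for _ in range(3)]
for t in threads:
    t.join()
```

    Once the pipeline is full, all stages work on different messages concurrently, which is the source of the model's throughput advantage over a single monolithic loop.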

  12. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods (13, 12, 44, 38). The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method (19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9) was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations (39, 25). In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [24]. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  13. Alternatives for randomization in lifestyle intervention studies in cancer patients were not better than conventional randomization.

    PubMed

    Velthuis, Miranda J; May, Anne M; Monninkhof, Evelyn M; van der Wall, Elsken; Peeters, Petra H M

    2012-03-01

    Assessing effects of lifestyle interventions in cancer patients has some specific challenges. Although randomization is urgently needed for evidence-based knowledge, sometimes it is difficult to apply conventional randomization (i.e., consent preceding randomization and intervention) in daily settings. Randomization before seeking consent was proposed by Zelen, and additional modifications were proposed since. We discuss four alternatives for conventional randomization: single and double randomized consent design, two-stage randomized consent design, and the design with consent to postponed information. We considered these designs when designing a study to assess the impact of physical activity on cancer-related fatigue and quality of life. We tested the modified Zelen design with consent to postponed information in a pilot. The design was chosen to prevent drop out of participants in the control group because of disappointment about the allocation. The result was a low overall participation rate most likely because of perceived lack of information by eligible patients and a relatively high dropout in the intervention group. We conclude that the alternatives were not better than conventional randomization. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. SCORPIO: A Scalable Two-Phase Parallel I/O Library With Application To A Large Scale Subsurface Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T

    2013-01-01

    Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O) that incorporates our optimized two-phase I/O approach. The library provides a simplified higher level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5) and implements optimized I/O access patterns that can scale on larger number of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.
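    The two-phase pattern can be sketched without MPI as a plain-Python model of the grouping and distribution logic (a real implementation would use MPI_Comm_split and collective communication; the group assignment rule below is an illustrative choice):

```python
def split_into_io_groups(n_ranks, n_groups):
    """Mimic splitting a global communicator: rank r joins group r % n_groups,
    and the lowest rank in each group acts as that group's designated I/O root."""
    groups = {g: [r for r in range(n_ranks) if r % n_groups == g]
              for g in range(n_groups)}
    roots = {g: min(members) for g, members in groups.items()}
    return groups, roots

def two_phase_read(data, groups):
    """Phase 1 (disk I/O): each group's root 'reads' its slice of the dataset.
    Phase 2 (communication): the root hands one element to each group member."""
    received = {}
    for g, members in groups.items():
        chunk = data[g::len(groups)]       # root's contiguous-per-group disk read
        for member, value in zip(members, chunk):
            received[member] = value       # root-to-member distribution
    return received

groups, roots = split_into_io_groups(8, 2)
out = two_phase_read(list(range(8)), groups)
```

    Concentrating the file-system traffic in a few roots is what relieves the single point of resource contention on file systems like Lustre.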

  15. A Randomized Parallel Study for Simulated Internal Jugular Vein Cannulation Using Simple Needle Guide Device

    ClinicalTrials.gov

    2017-08-14

    Doctors attending a central line insertion training course for new residents of a university hospital from March 2017 to June 2017; physicians who had performed fewer than 10 ultrasound-guided internal jugular vein cannulations participated in this study

  16. Postdural puncture headache is not an age-related symptom in children: a prospective, open-randomized, parallel group study comparing a 22-gauge Quincke with a 22-gauge Whitacre needle.

    PubMed

    Kokki, H; Salonvaara, M; Herrgård, E; Onen, P

    1999-01-01

    Many reports have shown a low incidence of postdural puncture headache (PDPH) and other complaints in young children. The objective of this open-randomized, prospective, parallel group study was to compare the use of a cutting point spinal needle (22-G Quincke) with a pencil point spinal needle (22-G Whitacre) in children. We studied the puncture characteristics, success rate and incidence of postpuncture complaints in 57 children, aged 8 months to 15 years, following 98 lumbar punctures (LP). The patients/parents completed a diary at 3 and 7 days after LP. The response rate was 97%. The incidence of PDPH was similar, 15% in the Quincke group and 9% in the Whitacre group (P=0.42). The risk of developing a PDPH was not dependent on age (r < 0.00, P=0.67). Eight of the 11 PDPHs developed in children younger than 10 years, the youngest being 23 months old.

  17. The effective propagation constants of SH wave in composites reinforced by dispersive parallel nanofibers

    NASA Astrophysics Data System (ADS)

    Qiang, FangWei; Wei, PeiJun; Li, Li

    2012-07-01

    In the present paper, the effective propagation constants of elastic SH waves in composites with randomly distributed parallel cylindrical nanofibers are studied. The surface stress effects are considered based on the surface elasticity theory and non-classical interfacial conditions between the nanofiber and the host are derived. The scattering waves from individual nanofibers embedded in an infinite elastic host are obtained by the plane wave expansion method. The scattering waves from all fibers are summed up to obtain the multiple scattering waves. The interactions among random dispersive nanofibers are taken into account by the effective field approximation. The effective propagation constants are obtained by the configurational average of the multiple scattering waves. The effective speed and attenuation of the averaged wave and the associated dynamical effective shear modulus of composites are numerically calculated. Based on the numerical results, the size effects of the nanofibers on the effective propagation constants and the effective modulus are discussed.
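The configurational-average step described above can be made concrete with a Foldy-type dispersion relation; the sketch below states a common form for a dilute random array of parallel cylindrical scatterers, with the caveat that sign and normalization conventions for the forward scattering amplitude vary between authors, and the surface-elasticity corrections of the paper modify the scattering coefficients themselves.

```latex
% Foldy-type effective wavenumber for a dilute random array of
% parallel cylinders (conventions vary; surface effects enter through f):
%   K^2 \approx k^2 + 4\,i\,n_0\, f(0),
% where k is the host SH wavenumber, n_0 the number density of fibers
% per unit cross-sectional area, and f(0) the forward scattering
% amplitude of a single (surface-corrected) nanofiber.
%
% The effective speed and attenuation follow definitionally from the
% complex effective wavenumber K:
%   c_{\mathrm{eff}} = \frac{\omega}{\operatorname{Re} K},
%   \qquad
%   \alpha = \operatorname{Im} K .
```

The size dependence discussed in the abstract enters through $f(0)$: the surface-elasticity boundary conditions make the single-fiber scattering amplitude radius-dependent beyond the classical geometric scaling.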

  18. Magnetic orientation of nontronite clay in aqueous dispersions and its effect on water diffusion.

    PubMed

    Abrahamsson, Christoffer; Nordstierna, Lars; Nordin, Matias; Dvinskikh, Sergey V; Nydén, Magnus

    2015-01-01

    The diffusion rate of water in dilute clay dispersions depends on particle concentration, size, shape, aggregation and water-particle interactions. As nontronite clay particles magnetically align parallel to the magnetic field, directional self-diffusion anisotropy can be created within such dispersions. Here we study water diffusion in exfoliated nontronite clay dispersions by diffusion NMR and time-dependent 1H-NMR imaging profiles. The dispersion clay concentration was varied between 0.3 and 0.7 vol%. After magnetic alignment of the clay particles in these dispersions, a maximum difference of 20% was measured between the parallel and perpendicular self-diffusion coefficients in the dispersion with 0.7 vol% clay. A method was developed to measure water diffusion within the dispersion in the absence of a magnetic field (random clay orientation), as this is not possible with standard diffusion NMR. However, no significant difference in self-diffusion coefficient between random and aligned dispersions could be observed.

  19. Symposium on Parallel Computational Methods for Large-scale Structural Analysis and Design, 2nd, Norfolk, VA, US

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)

    1993-01-01

    Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.

  20. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    The user does not have to add any special logic to aid in synchronization. The Time Warp Operating System (TWOS) computer program is a special-purpose operating system designed to support parallel discrete-event simulation. It is a complete implementation of the Time Warp mechanism. It supports only simulations and other computations designed for virtual time. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. TWOS and TWSIM are written in, and support simulations in, the C programming language.
