Heterogeneous Hardware Parallelism: Review of the IN2P3 2016 Computing School
NASA Astrophysics Data System (ADS)
Lafage, Vincent
2017-11-01
Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX) and manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code will illustrate the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation will be addressed. We give a short report on the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
ERIC Educational Resources Information Center
Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark
2010-01-01
Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…
ERIC Educational Resources Information Center
Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne
2014-01-01
Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This "preliminary" randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive…
Kokki, H; Salonvaara, M; Herrgård, E; Onen, P
1999-01-01
Many reports have shown a low incidence of postdural puncture headache (PDPH) and other complaints in young children. The objective of this open, randomized, prospective, parallel-group study was to compare a cutting-point spinal needle (22-G Quincke) with a pencil-point spinal needle (22-G Whitacre) in children. We studied the puncture characteristics, success rate and incidence of post-puncture complaints in 57 children, aged 8 months to 15 years, following 98 lumbar punctures (LP). The patients/parents completed a diary at 3 and 7 days after LP. The response rate was 97%. The incidence of PDPH was similar: 15% in the Quincke group and 9% in the Whitacre group (P=0.42). The risk of developing a PDPH was not dependent on age (r < 0.00, P=0.67). Eight of the 11 PDPHs developed in children younger than 10 years, the youngest being 23 months old.
Deuse, Tobias; Bara, Christoph; Barten, Markus J; Hirt, Stephan W; Doesch, Andreas O; Knosalla, Christoph; Grinninger, Carola; Stypmann, Jörg; Garbade, Jens; Wimmer, Peter; May, Christoph; Porstner, Martina; Schulz, Uwe
2015-11-01
In recent years a series of trials has sought to define the optimal protocol for everolimus-based immunosuppression in heart transplantation, with the goal of minimizing exposure to calcineurin inhibitors (CNIs) and harnessing the non-immunosuppressive benefits of everolimus. Randomized studies have demonstrated that immunosuppressive potency can be maintained in heart transplant patients receiving everolimus despite marked CNI reduction, although very early CNI withdrawal may be inadvisable. A potential renal advantage has been shown for everolimus, but the optimal time for conversion and the adequate reduction in CNI exposure remain to be defined. Other reasons for use of everolimus include a substantial reduction in the risk of cytomegalovirus infection, and evidence for inhibition of cardiac allograft vasculopathy, a major cause of graft loss. The ongoing MANDELA study is a 12-month multicenter, randomized, open-label, parallel-group study in which efficacy, renal function and safety are compared in approximately 200 heart transplant patients. Patients receive CNI therapy, steroids and everolimus or mycophenolic acid during months 3 to 6 post-transplant, and are then randomized at month 6 post-transplant (i) to convert to CNI-free immunosuppression with everolimus and mycophenolic acid or (ii) to continue reduced-exposure CNI with concomitant everolimus. Patients are then followed to month 18 post-transplant. The rationale and expectations for the trial and its methodology are described herein.
Yotebieng, Marcel; Behets, Frieda; Kawende, Bienvenu; Ravelomanana, Noro Lantoniaina Rosa; Tabala, Martine; Okitolonda, Emile W
2017-04-26
Despite the rapid adoption of the World Health Organization's 2013 guidelines, children continue to be infected with HIV perinatally because of sub-optimal adherence to the continuum of HIV care in maternal and child health (MCH) clinics. To achieve the UNAIDS goal of eliminating mother-to-child HIV transmission, multiple, adaptive interventions need to be implemented to improve adherence to the HIV continuum. The aim of this open-label, parallel, group-randomized trial is to evaluate the effectiveness of Continuous Quality Improvement (CQI) interventions implemented at the facility and health-district levels to improve retention in care and virological suppression through 24 months postpartum among pregnant and breastfeeding women receiving ART in MCH clinics in Kinshasa, Democratic Republic of Congo. Prior to randomization, the current monitoring and evaluation system will be strengthened to enable the collection of the high-quality, individual patient-level data necessary for timely production of indicators and monitoring of program outcomes to inform the CQI interventions. Following randomization, in health districts randomized to CQI, quality improvement (QI) teams will be established at the district level and at the MCH clinic level. For 18 months, QI teams will be brought together quarterly to identify key bottlenecks in the care delivery system using data from the monitoring system, develop an action plan to address those bottlenecks, and implement the action plan at the level of their district or clinics. If proven to be effective, CQI as designed here could be scaled up rapidly in resource-scarce settings to accelerate progress towards the goal of an AIDS-free generation. The protocol was retrospectively registered on February 7, 2017. ClinicalTrials.gov identifier: NCT03048669.
Nipanikar, Sanjay U; Gajare, Kamalakar V; Vaidya, Vidyadhar G; Kamthe, Amol B; Upasani, Sachin A; Kumbhar, Vidyadhar S
2017-01-01
The main objective of the present study was to assess the efficacy and safety of AHPL/AYTOP/0113 cream, a polyherbal formulation, in comparison with Framycetin sulphate cream in acute wounds. It was an open-label, randomized, comparative, parallel-group, multi-center clinical study. A total of 47 subjects were randomly assigned to Group-A (AHPL/AYTOP/0113 cream) and 42 subjects to Group-B (Framycetin sulphate cream). All subjects were advised to apply the study drug three times daily for 21 days or up to complete wound healing (whichever was earlier). All subjects were called for follow-up on days 2, 4, 7, 10, 14, 17 and 21, or up to the day of complete wound healing. Data describing quantitative measures are expressed as mean ± SD. Comparison of variables representing categorical data was performed using the Chi-square test. Group-A subjects required significantly less time for wound healing (P < 0.05): a mean of 7.77 days versus 9.87 days for Group-B. At the end of the study, significantly better (P < 0.05) results were observed in Group-A than in Group-B in mean wound surface area, wound healing parameters and pain associated with the wound. Excellent overall efficacy and tolerability were observed in both groups. No adverse event or adverse drug reaction was noted in any subject in either group. AHPL/AYTOP/0113 cream proved to be superior to Framycetin sulphate cream in the healing of acute wounds.
[Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].
Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan
2016-03-01
To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold had disordered orientations. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells had a parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly shape and parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random cell arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.
NASA Technical Reports Server (NTRS)
Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele
2004-01-01
In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators in the characterization of spatial random fields and in understanding the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application that characterizes spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model many times, transferring information from the indirect data to the target variable. MAD# offers two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple multi-core computers through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring and a job-queuing mechanism. This poster shows how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the single-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1,200 hours for the full 10 million). In the HTCondor evaluation, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1,200 hours to 60 hours).
Thunström, Erik; Manhem, Karin; Yucel-Lindberg, Tülay; Rosengren, Annika; Lindberg, Caroline; Peker, Yüksel
2016-11-01
Blood pressure reduction in response to antihypertensive agents is less for patients with obstructive sleep apnea (OSA). Increased sympathetic and inflammatory activity, as well as alterations in the renin-angiotensin-aldosterone system, may play a role in this context. To address the cardiovascular mechanisms involved in response to an angiotensin II receptor antagonist, losartan, and continuous positive airway pressure (CPAP) as add-on treatment for hypertension and OSA. Newly diagnosed hypertensive patients with or without OSA (allocated in a 2:1 ratio for OSA vs. no OSA) were treated with losartan 50 mg daily during a 6-week two-center, open-label, prospective, case-control, parallel-design study. In the second 6-week, sex-stratified, open-label, randomized, parallel-design study, all subjects with OSA continued to receive losartan and were randomly assigned to either CPAP as add-on therapy or to no CPAP (1:1 ratio for CPAP vs. no CPAP). Study subjects without OSA were followed in parallel while they continued to take losartan. Blood samples were collected at baseline, after 6 weeks, and after 12 weeks for analysis of renin, aldosterone, noradrenaline, adrenaline, and inflammatory markers. Fifty-four patients with OSA and 35 without OSA were included in the first 6-week study. Losartan significantly increased renin levels and reduced aldosterone levels in the group without OSA. There was no significant decrease in aldosterone levels among patients with OSA. Add-on CPAP treatment tended to lower aldosterone levels, but reductions were more pronounced in measures of sympathetic activity. No significant changes in inflammatory markers were observed following treatment with losartan and CPAP. Hypertensive patients with OSA responded to losartan treatment with smaller reductions in aldosterone compared with hypertensive patients without OSA. 
Sympathetic system activity seemed to respond primarily to add-on CPAP treatment in patients with newly discovered hypertension and OSA. Clinical trial registered with www.clinicaltrials.gov (NCT00701428).
Chandramohan, S M; Gajbhiye, Raj Narenda; Agwarwal, Anil; Creedon, Erin; Schwiers, Michael L; Waggoner, Jason R; Tatla, Daljit
2013-08-01
Although stapling is an alternative to hand-suturing in gastrointestinal surgery, recent trials specifically designed to evaluate differences between the two in surgery time, anastomosis time, and return to bowel activity are lacking. This trial compared the outcomes of the two in subjects undergoing open gastrointestinal surgery. Adult subjects undergoing emergency or elective surgery requiring a single gastric, small, or large bowel anastomosis were enrolled into this open-label, prospective, randomized, interventional, parallel, multicenter, controlled trial. Randomization was assigned in a 1:1 ratio between the hand-sutured group (n = 138) and the stapled group (n = 142). Anastomosis time, surgery time, and time to bowel activity were collected and compared as primary endpoints. A total of 280 subjects were enrolled from April 2009 to September 2010. Only the time of anastomosis was significantly different between the two arms: 17.6 ± 1.90 min (stapled) and 20.6 ± 1.90 min (hand-sutured). This difference was deemed not clinically or economically meaningful. Safety outcomes and other secondary endpoints were similar between the two arms. Mechanical stapling is faster than hand-suturing for the construction of gastrointestinal anastomoses. Apart from this, stapling and hand-suturing are similar with respect to the outcomes measured in this trial.
The OpenMP Implementation of NAS Parallel Benchmarks and its Performance
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry
1999-01-01
As the new ccNUMA architecture became popular in recent years, parallel programming with compiler directives on these machines has evolved to accommodate new needs. In this study, we examine the effectiveness of OpenMP directives for parallelizing the NAS Parallel Benchmarks. Implementation details will be discussed and performance will be compared with the MPI implementation. We have demonstrated that OpenMP can achieve very good results for parallelization on a shared memory system, but effective use of memory and cache is very important.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. 
Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang
2017-10-01
The shuffled complex evolution optimization method developed at the University of Arizona (SCE-UA) has been successfully applied for many years in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA was implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtained the best overall acceleration, though with the most complex source code. The parallel SCE-UA shows strong promise for real-world applications.
Tay, Lee; Leon, Francisco; Vratsanos, George; Raymond, Ralph; Corbo, Michael
2007-01-01
The effect of abatacept, a selective T-cell co-stimulation modulator, on vaccination has not been previously investigated. In this open-label, single-dose, randomized, parallel-group, controlled study, the effect of a single 750 mg infusion of abatacept on the antibody response to the intramuscular tetanus toxoid vaccine (primarily a memory response to a T-cell-dependent peptide antigen) and the intramuscular 23-valent pneumococcal vaccine (a less T-cell-dependent response to a polysaccharide antigen) was measured in 80 normal healthy volunteers. Subjects were uniformly randomized to receive one of four treatments: Group A (control group), subjects received vaccines on day 1 only; Group B, subjects received vaccines 2 weeks before abatacept; Group C, subjects received vaccines 2 weeks after abatacept; and Group D, subjects received vaccines 8 weeks after abatacept. Anti-tetanus and anti-pneumococcal (Danish serotypes 2, 6B, 8, 9V, 14, 19F and 23F) antibody titers were measured 14 and 28 days after vaccination. While there were no statistically significant differences between the dosing groups, geometric mean titers following tetanus or pneumococcal vaccination were generally lower in subjects who were vaccinated 2 weeks after receiving abatacept, compared with control subjects. A positive response (defined as a twofold increase in antibody titer from baseline) to tetanus vaccination at 28 days was seen, however, in ≥ 60% of subjects across all treatment groups versus 75% of control subjects. Similarly, over 70% of abatacept-treated subjects versus all control subjects (100%) responded to at least three pneumococcal serotypes, and approximately 25–30% of abatacept-treated subjects versus 45% of control subjects responded to at least six serotypes. PMID:17425783
Katakami, Naoto; Mita, Tomoya; Yoshii, Hidenori; Shiraiwa, Toshihiko; Yasuda, Tetsuyuki; Okada, Yosuke; Umayahara, Yutaka; Kaneto, Hideaki; Osonoi, Takeshi; Yamamoto, Tsunehiko; Kuribayashi, Nobuichi; Maeda, Kazuhisa; Yokoyama, Hiroki; Kosugi, Keisuke; Ohtoshi, Kentaro; Hayashi, Isao; Sumitani, Satoru; Tsugawa, Mamiko; Ohashi, Makoto; Taki, Hideki; Nakamura, Tadashi; Kawashima, Satoshi; Sato, Yasunori; Watada, Hirotaka; Shimomura, Iichiro
2017-10-01
Sodium-glucose co-transporter-2 (SGLT2) inhibitors are anti-diabetic agents that improve glycemic control with a low risk of hypoglycemia and ameliorate a variety of cardiovascular risk factors. The aim of the ongoing study described herein is to investigate the preventive effects of tofogliflozin, a potent and selective SGLT2 inhibitor, on the progression of atherosclerosis in subjects with type 2 diabetes (T2DM), using carotid intima-media thickness (IMT), an established marker of cardiovascular disease (CVD). The Study of Using Tofogliflozin for Possible better Intervention against Atherosclerosis for type 2 diabetes patients (UTOPIA) trial is a prospective, randomized, open-label, blinded-endpoint, multicenter, parallel-group comparative study. The aim was to recruit a total of 340 subjects with T2DM but no history of apparent CVD at 24 clinical sites and randomly allocate these to a tofogliflozin treatment group or a conventional treatment group using drugs other than SGLT2 inhibitors. As primary outcomes, changes in mean and maximum IMT of the common carotid artery during a 104-week treatment period will be measured by carotid echography. Secondary outcomes include changes in glycemic control, parameters related to β-cell function and diabetic nephropathy, the occurrence of CVD and adverse events, and biochemical measurements reflecting vascular function. This is the first study to address the effects of SGLT2 inhibitors on the progression of carotid IMT in subjects with T2DM without a history of CVD. The results will be available in the very near future, and these findings are expected to provide clinical data that will be helpful in the prevention of diabetic atherosclerosis and subsequent CVD. Kowa Co., Ltd. UMIN000017607.
Whole body vibration for older persons: an open randomized, multicentre, parallel, clinical trial
2011-01-01
Background Institutionalized older persons have a poor functional capacity. Including physical exercise in their routine activities decreases their frailty and improves their quality of life. Whole-body vibration (WBV) training is a type of exercise that seems beneficial in frail older persons to improve their functional mobility, but the evidence is inconclusive. This trial will compare the results of exercise with WBV and exercise without WBV in improving body balance, muscle performance and fall prevention in institutionalized older persons. Methods/Design An open, multicentre, parallel randomized clinical trial with blinded assessment. 160 nursing home residents aged over 65 years and of both sexes will be identified to participate in the study. Participants will be centrally randomised and allocated to interventions (vibration or exercise group) by telephone. The vibration group will perform static/dynamic exercises (balance and resistance training) on a vibratory platform (frequency: 30-35 Hz; amplitude: 2-4 mm) over a six-week training period (3 sessions/week). The exercise group will perform the same exercise protocol but without the vibration stimulus. The primary outcome measure is static/dynamic body balance. Secondary outcomes are muscle strength and the number of new falls. Follow-up measurements will be collected at 6 weeks and at 6 months after randomization. Efficacy will be analysed on an intention-to-treat (ITT) basis and 'per protocol'. The effects of the intervention will be evaluated using the t-test, Mann-Whitney test or Chi-square test, depending on the type of outcome. The final analysis will be performed 6 weeks and 6 months after randomization. Discussion This study will help to clarify whether WBV training improves body balance, gait mobility and muscle strength in frail older persons living in nursing homes. As far as we know, this will be the first study to evaluate the efficacy of WBV for the prevention of falls. 
Trial Registration ClinicalTrials.gov: NCT01375790 PMID:22192313
van der Sluis, Pieter C; Ruurda, Jelle P; van der Horst, Sylvia; Verhage, Roy J J; Besselink, Marc G H; Prins, Margriet J D; Haverkamp, Leonie; Schippers, Carlo; Rinkes, Inne H M Borel; Joore, Hans C A; Ten Kate, Fiebo Jw; Koffijberg, Hendrik; Kroese, Christiaan C; van Leeuwen, Maarten S; Lolkema, Martijn P J K; Reerink, Onne; Schipper, Marguerite E I; Steenhagen, Elles; Vleggaar, Frank P; Voest, Emile E; Siersema, Peter D; van Hillegersberg, Richard
2012-11-30
For esophageal cancer patients, radical esophagolymphadenectomy is the cornerstone of multimodality treatment with curative intent. Transthoracic esophagectomy is the preferred surgical approach worldwide, allowing for en-bloc resection of the tumor with the surrounding lymph nodes. However, the percentage of cardiopulmonary complications associated with the transthoracic approach is high (50 to 70%). Recent studies have shown that robot-assisted minimally invasive thoraco-laparoscopic esophagectomy (RATE) is at least equivalent to the open transthoracic approach for esophageal cancer in terms of short-term oncological outcomes. RATE was accompanied by reduced blood loss, shorter ICU stay and improved lymph node retrieval compared with open esophagectomy, while the pulmonary complication rate, hospital stay and perioperative mortality were comparable. The objective is to evaluate the efficacy, risks, quality of life and cost-effectiveness of RATE as an alternative to open transthoracic esophagectomy for the treatment of esophageal cancer. This is an investigator-initiated and investigator-driven, monocenter, randomized controlled, parallel-group superiority trial. All adult patients (age ≥ 18 and ≤ 80 years) with histologically proven, surgically resectable (cT1-4a, N0-3, M0) esophageal carcinoma of the intrathoracic esophagus and with European Clinical Oncology Group performance status 0, 1 or 2 will be assessed for eligibility and included after obtaining informed consent. Patients (n = 112) with resectable esophageal cancer are randomized in the outpatient department to either RATE (n = 56) or open three-stage transthoracic esophageal resection (n = 56). The primary outcome of this study is the percentage of overall complications (grade 2 and higher) as stated by the modified Clavien-Dindo classification of surgical complications. 
This is the first randomized controlled trial designed to compare RATE with open transthoracic esophagectomy as surgical treatment for resectable esophageal cancer. If our hypothesis is proven correct, RATE will result in a lower percentage of postoperative complications, lower blood loss, and shorter hospital stay, but with at least similar oncologic outcomes and better postoperative quality of life compared with open transthoracic esophagectomy. The study started in January 2012. Follow-up will be 5 years. Short-term results will be analyzed and published after discharge of the last randomized patient. Dutch trial register: NTR3291. ClinicalTrials.gov: NCT01544790.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers: the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and the parallelization approaches used in them. Our approach includes analysis of a module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using airborne laser scanning data representing the land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speed on a standard multi-core computer while maintaining the accuracy of results in comparison with the output of the original modules. The presented approach demonstrates the simplicity and efficiency of parallelizing open-source GRASS GIS modules using OpenMP, leading to increased performance of this geospatial software on standard multi-core computers.
Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores
NASA Astrophysics Data System (ADS)
Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei
We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool on the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds some of the commercial tools.
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application in order to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse-grained parallelization and OpenMP [9] for fine-grained loop-level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse-grain process-level parallelization and loop-level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
Portable multi-node LQCD Monte Carlo simulations using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization on multiple computing nodes using OpenACC to manage parallelism within the node and OpenMPI to manage parallelism among the nodes. We first discuss the available strategies for maximizing performance, then describe selected relevant details of the code, and finally measure the performance and scaling behavior that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.
Computer-Aided Parallelizer and Optimizer
NASA Technical Reports Server (NTRS)
Jin, Haoqiang
2011-01-01
The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
Weinreb, Robert N; Liebmann, Jeffrey M; Martin, Keith R; Kaufman, Paul L; Vittitow, Jason L
2018-01-01
To compare the diurnal intraocular pressure (IOP)-lowering effect of latanoprostene bunod (LBN) 0.024% with timolol maleate 0.5% in subjects with open-angle glaucoma (OAG) or ocular hypertension (OHT). Pooled analysis of two phase 3, randomized, multicenter, double-masked, parallel-group, noninferiority trials (APOLLO and LUNAR), each with open-label safety extension phases. Adults with OAG or OHT were randomized 2:1 to double-masked treatment with LBN once daily (qd) or timolol twice daily (bid) for 3 months followed by open-label LBN treatment for 3 (LUNAR) or 9 (APOLLO) months. IOP was measured at 8 AM, 12 PM, and 4 PM at week 2, week 6, and months 3, 6, 9, and 12. Of the 840 subjects randomized, 774 (LBN, n=523; timolol crossover to LBN, n=251) completed the efficacy phase, and 738 completed the safety extension phase. Mean IOP was significantly lower with LBN versus timolol at all 9 evaluation timepoints during the efficacy phase (P<0.001). A significantly greater proportion of LBN-treated subjects attained a mean IOP ≤18 mm Hg and IOP reduction ≥25% from baseline versus timolol-treated subjects (P<0.001). The IOP reduction with LBN was sustained through the safety phase; subjects crossed over from timolol to LBN experienced additional significant IOP lowering (P≤0.009). Both treatments were well tolerated, and there were no safety concerns with long-term LBN treatment. In this pooled analysis of subjects with OAG and OHT, LBN 0.024% qd provided greater IOP-lowering compared with timolol 0.5% bid and maintained lowered IOP through 12 months. LBN demonstrated a safety profile comparable to that of prostaglandin analogs.
Morales-Fernandez, Angeles; Morales-Asencio, Jose Miguel; Canca-Sanchez, Jose Carlos; Moreno-Martin, Gabriel; Vergara-Romero, Manuel
2016-05-01
To determine the effect of a nurse-led intervention programme for patients with chronic non-cancer pain. Chronic non-cancer pain is a widespread health problem and one that is insufficiently controlled. Nurses can play a vital role in pain management, using best practices in the assessment and management of pain under a holistic approach where the patient plays a proactive role in addressing the disease process. Improving the quality of life, reducing disability, achieving acceptance of health status, coping and breaking the vicious circle of pain should be the prime objectives of our care management programme. Open randomized parallel controlled study. The experimental group will undertake one single initial session, followed by six group sessions led by nurses, aimed at empowering patients for the self-management of pain. Healthy behaviours will be encouraged, such as sleep and postural hygiene, promotion of physical activity and healthy eating. Educational interventions on self-esteem, pain-awareness, communication and relaxing techniques will be carried out. As primary end points, quality of life, perceived level of pain, anxiety and depression will be evaluated. Secondary end points will be coping and satisfaction. Follow-up will be performed at 12 and 24 weeks. The study was approved by the Ethics and Research Committee Costa del Sol. If significant effects were detected, impact on quality of life through a nurse-led programme would offer a complementary service to existing pain clinics for a group of patients with frequent unmet needs. © 2016 John Wiley & Sons Ltd.
An OpenACC-Based Unified Programming Model for Multi-accelerator Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S
2015-01-01
This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.
Characterizing and Mitigating Work Time Inflation in Task Parallel Programs
Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...
2013-01-01
Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
MPI, HPF or OpenMP: A Study with the NAS Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1999-01-01
Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and pros and cons of the different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of the techniques to realistic aerospace applications will be presented.
Three is much more than two in coarsening dynamics of cyclic competitions
NASA Astrophysics Data System (ADS)
Mitarai, Namiko; Gunnarson, Ivar; Pedersen, Buster Niels; Rosiek, Christian Anker; Sneppen, Kim
2016-04-01
The classical game of rock-paper-scissors has inspired experiments and spatial model systems that address the robustness of biological diversity. In particular, the game nicely illustrates that cyclic interactions allow multiple strategies to coexist for long-time intervals. When formulated in terms of a one-dimensional cellular automaton, the spatial distribution of strategies exhibits coarsening with algebraically growing domain size over time, while the two-dimensional version allows domains to break and thereby opens the possibility for long-time coexistence. We consider a quasi-one-dimensional implementation of the cyclic competition, and study the long-term dynamics as a function of rare invasions between parallel linear ecosystems. We find that increasing the complexity from two to three parallel subsystems allows a transition from complete coarsening to an active steady state where the domain size stays finite. We further find that this transition happens irrespective of whether the update is done in parallel for all sites simultaneously or done randomly in sequential order. In both cases, the active state is characterized by localized bursts of dislocations, followed by longer periods of coarsening. In the case of the parallel dynamics, we find that there is another phase transition between the active steady state and the coarsening state within the three-line system when the invasion rate between the subsystems is varied. We identify the critical parameter for this transition and show that the density of active boundaries has critical exponents that are consistent with the directed percolation universality class. On the other hand, numerical simulations with the random sequential dynamics suggest that the system may exhibit an active steady state as long as the invasion rate is finite.
Hosomi, Naohisa; Nagai, Yoji; Kohriyama, Tatsuo; Ohtsuki, Toshiho; Aoki, Shiro; Nezu, Tomohisa; Maruyama, Hirofumi; Sunami, Norio; Yokota, Chiaki; Kitagawa, Kazuo; Terayama, Yasuo; Takagi, Makoto; Ibayashi, Setsuro; Nakamura, Masakazu; Origasa, Hideki; Fukushima, Masanori; Mori, Etsuro; Minematsu, Kazuo; Uchiyama, Shinichiro; Shinohara, Yukito; Yamaguchi, Takenori; Matsumoto, Masayasu
2015-09-01
Although statin therapy is beneficial for the prevention of initial stroke, the benefit for recurrent stroke and its subtypes remains to be determined in Asians, whose stroke profiles differ from those of Caucasians. This study examined whether treatment with low-dose pravastatin prevents stroke recurrence in ischemic stroke patients. This is a multicenter, randomized, open-label, blinded-endpoint, parallel-group study of patients who experienced non-cardioembolic ischemic stroke. All patients had a total cholesterol level between 4.65 and 6.21 mmol/L at enrollment, without the use of statins. The pravastatin group patients received 10 mg of pravastatin/day; the control group patients received no statins. The primary endpoint was the occurrence of stroke and transient ischemic attack (TIA), with the onset of each stroke subtype set to be one of the secondary endpoints. Although 3000 patients were targeted, 1578 patients (491 female, age 66.2 years) were recruited and randomly assigned to the pravastatin group or the control group. During the follow-up of 4.9 ± 1.4 years, although total stroke and TIA occurred similarly in both groups (2.56 vs. 2.65%/year), onset of atherothrombotic infarction was less frequent in the pravastatin group (0.21 vs. 0.64%/year, p = 0.0047, adjusted hazard ratio 0.33 [95%CI 0.15 to 0.74]). No significant intergroup difference was found for the onset of other stroke subtypes, or for the occurrence of adverse events. Although whether low-dose pravastatin prevents recurrence of total stroke or TIA still needs to be examined in Asians, this study has generated a hypothesis that it may reduce the occurrence of stroke due to large-artery atherosclerosis. This study was initially supported by a grant from the Ministry of Health, Labour and Welfare, Japan. After the governmental support expired, it was conducted in collaboration between Hiroshima University and the Foundation for Biomedical Research and Innovation.
Gnessi, Lucio; Bacarea, Vladimir; Marusteri, Marius; Piqué, Núria
2015-10-30
There is a strong rationale for the use of agents with film-forming protective properties, like xyloglucan, for the treatment of acute diarrhea. However, few data from clinical trials are available. A randomized, controlled, open-label, parallel group, multicentre, clinical trial was performed to evaluate the efficacy and safety of xyloglucan, in comparison with diosmectite and Saccharomyces boulardii in adult patients with acute diarrhea due to different causes. Patients were randomized to receive a 3-day treatment. Symptoms (stools type, nausea, vomiting, abdominal pain and flatulence) were assessed by a self-administered ad-hoc questionnaire 1, 3, 6, 12, 24, 48 and 72 h following the first dose administration. Adverse events were also recorded. A total of 150 patients (69.3 % women and 30.7 % men, mean age 47.3 ± 14.7 years) were included (n = 50 in each group). A faster onset of action was observed in the xyloglucan group compared with the diosmectite and S. boulardii groups. At 6 h xyloglucan produced a statistically significant higher decrease in the mean number of type 6 and 7 stools compared with diosmectite (p = 0.031). Xyloglucan was the most efficient treatment in reducing the percentage of patients with nausea throughout the study period, particularly during the first hours (from 26 % at baseline to 4 % after 6 and 12 h). An important improvement of vomiting was observed in all three treatment groups. Xyloglucan was more effective than diosmectite and S. boulardii in reducing abdominal pain, with a constant improvement observed throughout the study. The clinical evolution of flatulence followed similar patterns in the three groups, with continuous improvement of the symptom. All treatments were well tolerated, without reported adverse events. Xyloglucan is a fast, efficacious and safe option for the treatment of acute diarrhea. EudraCT number 2014-001814-24 (date: 2014-04-28) ISRCTN number: 90311828.
Rossignol, Patrick; Dorval, Marc; Fay, Renaud; Ros, Joan Fort; Loughraieb, Nathalie; Moureau, Frédérique; Laville, Maurice
2013-06-01
Anticoagulation for chronic dialysis patients with contraindications to heparin administration is challenging. Current guidelines state that in patients with increased bleeding risks, strategies that can induce systemic anticoagulation should be avoided. Heparin-free dialysis using intermittent saline flushes is widely adopted as the method of choice for patients at risk of bleeding, although on-line blood predilution may also be used. A new dialyzer, Evodial (Gambro, Lund, Sweden), is grafted with unfractionated heparin during the manufacturing process and may allow safe and efficient heparin-free hemodialysis sessions. In the present trial, Evodial was compared to standard care with either saline flushes or blood predilution. The HepZero study is the first international (seven countries), multicenter (10 centers), randomized, controlled, open-label, non-inferiority (and if applicable subsequently, superiority) trial with two parallel groups, comprising 252 end-stage renal disease patients treated by maintenance hemodialysis for at least 3 months and requiring heparin-free dialysis treatments. Patients will be treated during a maximum of three heparin-free dialysis treatments with either saline flushes or blood predilution (control group), or Evodial. The first heparin-free dialysis treatment will be considered successful when there is: no complete occlusion of air traps or dialyzer rendering dialysis impossible; no additional saline flushes to prevent clotting; no change of dialyzer or blood lines because of clotting; and no premature termination (early rinse-back) because of clotting.The primary objectives of the study are to determine the effectiveness of the Evodial dialyzer, compared with standard care in terms of successful treatments during the first heparin-free dialysis. If the non-inferiority of Evodial is demonstrated then the superiority of Evodial over standard care will be tested. 
The HepZero study results may have major clinical implications for patient care. ClinicalTrials.gov NCT01318486.
Yang, Wenying; Zhu, Lvyun; Meng, Bangzhu; Liu, Yu; Wang, Wenhui; Ye, Shandong; Sun, Li; Miao, Heng; Guo, Lian; Wang, Zhanjian; Lv, Xiaofeng; Li, Quanmin; Ji, Qiuhe; Zhao, Weigang; Yang, Gangyi
2016-01-01
The aim of the present study was to compare the efficacy and safety of subject-driven and investigator-driven titration of biphasic insulin aspart 30 (BIAsp 30) twice daily (BID). In this 20-week, randomized, open-label, two-group parallel, multicenter trial, Chinese patients with type 2 diabetes inadequately controlled by premixed/self-mixed human insulin were randomized 1:1 to subject-driven or investigator-driven titration of BIAsp 30 BID, in combination with metformin and/or α-glucosidase inhibitors. Dose adjustment was decided by patients in the subject-driven group after training, and by investigators in the investigator-driven group. Eligible adults (n = 344) were randomized in the study. The estimated glycated hemoglobin (HbA1c) reduction was 14.5 mmol/mol (1.33%) in the subject-driven group and 14.3 mmol/mol (1.31%) in the investigator-driven group. Non-inferiority of subject-titration vs investigator-titration in reducing HbA1c was confirmed, with estimated treatment difference -0.26 mmol/mol (95% confidence interval -2.05, 1.53) (-0.02%, 95% confidence interval -0.19, 0.14). Fasting plasma glucose, postprandial glucose increment and self-measured plasma glucose were improved in both groups without statistically significant differences. One severe hypoglycemic event was experienced by one subject in each group. A similar rate of nocturnal hypoglycemia (events/patient-year) was reported in the subject-driven (1.10) and investigator-driven (1.32) groups. There were 64.5 and 58.1% patients achieving HbA1c <53.0 mmol/mol (7.0%), and 51.2 and 45.9% patients achieving the HbA1c target without confirmed hypoglycemia throughout the trial in the subject-driven and investigator-driven groups, respectively. Subject-titration of BIAsp 30 BID was as efficacious and well-tolerated as investigator-titration. These findings support patients self-titrating BIAsp 30 BID under physicians' supervision.
Ferrando, Carlos; Suarez-Sipmann, Fernando; Tusman, Gerardo; León, Irene; Romero, Esther; Gracia, Estefania; Mugarra, Ana; Arocas, Blanca; Pozo, Natividad; Soro, Marina; Belda, Francisco J
2017-01-01
Low tidal volume (VT) during anesthesia minimizes lung injury but may be associated with a decrease in functional lung volume, impairing lung mechanics and efficiency. Lung recruitment (RM) can restore lung volume, but this may critically depend on the post-RM selected PEEP. This study was a randomized, two-parallel-arm, open study whose primary outcome was to compare the effects on driving pressure of adding a RM to low-VT ventilation, with or without an individualized post-RM PEEP, in patients without known previous lung disease during anesthesia. Consecutive patients scheduled for major abdominal surgery were submitted to low-VT ventilation (6 ml·kg-1) and standard PEEP of 5 cmH2O (pre-RM, n = 36). After 30 min of stabilization all patients received a RM and were randomly allocated to either continue with the same PEEP (RM-5 group, n = 18) or to an individualized open-lung PEEP (OL-PEEP) (Open Lung Approach, OLA group, n = 18), defined as the level resulting in maximal Cdyn during a decremental PEEP trial. We compared the effects on driving pressure and lung efficiency measured by volumetric capnography. OL-PEEP was found at 8±2 cmH2O. 36 patients were included in the final analysis. Compared with pre-RM, OLA resulted in a 22% increase in compliance and a 28% decrease in driving pressure. These parameters did not improve in the RM-5 group. The trend of the DP was significantly different between the OLA and RM-5 groups (p = 0.002). VDalv/VTalv was significantly lower in the OLA group after the RM (p = 0.035). Lung recruitment applied during low-VT ventilation improves driving pressure and lung efficiency only when applied as an open-lung strategy with an individualized PEEP in patients without lung diseases undergoing major abdominal surgery. ClinicalTrials.gov NCT02798133.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems.
Stone, John E; Gohara, David; Shi, Guochun
2010-05-01
We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures.
Multilevel Parallelization of AutoDock 4.2.
Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P
2011-04-28
Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
The Research of the Parallel Computing Development from the Angle of Cloud Computing
NASA Astrophysics Data System (ADS)
Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun
2017-10-01
Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people’s lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
The OpenGL visualization of the 2D parallel FDTD algorithm
NASA Astrophysics Data System (ADS)
Walendziuk, Wojciech
2005-02-01
This paper presents a way of visualizing a two-dimensional version of a parallel algorithm of the FDTD method. The visualization module was created on the basis of the OpenGL graphic standard with the use of the GLUT interface. In addition, the work includes results on the efficiency of the parallel algorithm in the form of speedup charts.
Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As hardware and software technologies have progressed, the performance of parallel programs using compiler directives has improved considerably. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
Charpentier, Guillaume; Benhamou, Pierre-Yves; Dardari, Dured; Clergeot, Annie; Franc, Sylvia; Schaepelynck-Belicar, Pauline; Catargi, Bogdan; Melki, Vincent; Chaillous, Lucy; Farret, Anne; Bosson, Jean-Luc; Penfornis, Alfred
2011-03-01
To demonstrate that Diabeo software enabling individualized insulin dose adjustments combined with telemedicine support significantly improves HbA(1c) in poorly controlled type 1 diabetic patients. In a six-month open-label parallel-group, multicenter study, adult patients (n = 180) with type 1 diabetes (>1 year), on a basal-bolus insulin regimen (>6 months), with HbA(1c) ≥ 8%, were randomized to usual quarterly follow-up (G1), home use of a smartphone recommending insulin doses with quarterly visits (G2), or use of the smartphone with short teleconsultations every 2 weeks but no visit until the end point (G3). Six-month mean HbA(1c) in G3 (8.41 ± 1.04%) was lower than in G1 (9.10 ± 1.16%; P = 0.0019). G2 displayed intermediate results (8.63 ± 1.07%). The Diabeo system gave a 0.91% (0.60; 1.21) improvement in HbA(1c) over controls and a 0.67% (0.35; 0.99) reduction when used without teleconsultation. There was no difference in the frequency of hypoglycemic episodes or in medical time spent for hospital or telephone consultations. However, patients in G1 and G2 spent nearly 5 h more than G3 patients attending hospital visits. The Diabeo system gives a substantial improvement to metabolic control in chronic, poorly controlled type 1 diabetic patients without requiring more medical time and at a lower overall cost for the patient than usual care.
Thunström, Erik; Manhem, Karin; Rosengren, Annika; Peker, Yüksel
2016-02-01
Obstructive sleep apnea (OSA) is common in people with hypertension, particularly resistant hypertension. Treatment with an antihypertensive agent alone is often insufficient to control hypertension in patients with OSA. To determine whether continuous positive airway pressure (CPAP) added to treatment with an antihypertensive agent has an impact on blood pressure (BP) levels. During the initial 6-week, two-center, open, prospective, case-control, parallel-design study (2:1; OSA/no-OSA), all patients began treatment with an angiotensin II receptor antagonist, losartan, 50 mg daily. In the second 6-week, sex-stratified, open, randomized, parallel-design study of the OSA group, all subjects continued to receive losartan and were randomly assigned to either nightly CPAP as add-on therapy or no CPAP. Twenty-four-hour BP monitoring included assessment every 15 minutes during daytime hours and every 20 minutes during the night. Ninety-one patients with untreated hypertension underwent a home sleep study (55 were found to have OSA; 36 were not). Losartan significantly reduced systolic, diastolic, and mean arterial BP in both groups (without OSA: 12.6, 7.2, and 9.0 mm Hg; with OSA: 9.8, 5.7, and 6.1 mm Hg). Add-on CPAP treatment produced no significant changes in 24-hour BP values but did reduce nighttime systolic BP by 4.7 mm Hg. All 24-hour BP values were reduced significantly in the 13 patients with OSA who used CPAP at least 4 hours per night. Losartan reduced BP in patients with OSA, but the reductions were smaller than in those without OSA. Add-on CPAP therapy resulted in no significant changes in 24-hour BP measures except in patients using CPAP efficiently. Clinical trial registered with www.clinicaltrials.gov (NCT00701428).
Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
NDL-v2.0: A new version of the numerical differentiation library for parallel architectures
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.
2014-07-01
We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N2) internal storage for Hessian calculations, if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 
180(2009)1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1], by incorporating higher order derivative information as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters such as in [1, 4]. The runtime of the N-body-type of problem changes considerably with the introduction of a longer cut-off between the bodies. In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. 
In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. 
It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. Specifically, the processes of a single MPI application must have identical address space and a user function resides at the same virtual address. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP with 2 threads, 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively and yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. 
Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
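The numerical core described in this record, finite differences with a step chosen to balance truncation against round-off error, with each function evaluation treated as an independent task, can be sketched compactly. The snippet below is an illustration, not NDL's interface: the step formula is a standard textbook choice, and Python threads stand in for the library's OpenMP tasks / TORC workers.

```python
import sys
from concurrent.futures import ThreadPoolExecutor

EPS = sys.float_info.epsilon

def step_size(x):
    # Textbook step for central differences, balancing truncation against
    # round-off error (h ~ eps**(1/3)); NDL's exact selection may differ.
    return EPS ** (1.0 / 3.0) * max(1.0, abs(x))

def gradient(f, x):
    # Every partial derivative needs two independent function evaluations,
    # so each component is submitted as a separate task, mirroring the
    # task-based scheduling described above.
    def partial(i):
        h = step_size(x[i])
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2.0 * h)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(partial, range(len(x))))
```

For expensive simulation-based functions such as those motivating this new version, the per-component evaluations dominate the runtime, which is why exposing them all as concurrent tasks matters.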
Performance evaluation of Canny edge detection on a tiled multicore architecture
NASA Astrophysics Data System (ADS)
Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald
2011-01-01
In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than compiler-managed loop-level parallelism implemented with OpenMP.
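The two strategies can be contrasted in a short sketch (illustrative Python; the paper's implementation targets the Tile64). `grad_row` is a hypothetical stand-in for one per-row stage of Canny edge detection; `loop_parallel` parallelizes the row loop, the role OpenMP's parallel-for plays, while `domain_decomposition` splits the image into strips that are processed independently.

```python
from concurrent.futures import ThreadPoolExecutor

def grad_row(row):
    # Hypothetical per-row stage: 1-D gradient magnitude along the row.
    return [abs(row[j + 1] - row[j]) for j in range(len(row) - 1)]

def loop_parallel(img):
    # Loop-level parallelism: iterations of the row loop run concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(grad_row, img))

def process_strip(strip):
    return [grad_row(r) for r in strip]

def domain_decomposition(img, tiles=2):
    # Domain decomposition: split the image into horizontal strips, process
    # each strip independently, then stitch the results back together.
    n = max(1, len(img) // tiles)
    strips = [img[i:i + n] for i in range(0, len(img), n)]
    with ThreadPoolExecutor() as pool:
        out = []
        for result in pool.map(process_strip, strips):
            out.extend(result)
    return out
```

The paper's finding is that explicit strip management can outperform compiler-managed loop scheduling, since the programmer controls data placement per tile.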
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
NASA Technical Reports Server (NTRS)
Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)
2001-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability have affected the take-up of this programming model approach. Significant progress has been made in hardware and software technologies, as a result the performance of parallel programs with compiler directives has also made improvements. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich), to automatically generate OpenMP based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200 × speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.
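The round-based, bulk-synchronous flavor of these algorithms can be conveyed with a serial sketch. The code below is a hypothetical illustration of the "many unmatched vertices at a time" idea, not the authors' matrix-algebraic implementation: in each round every unmatched row vertex proposes to one unmatched neighbor, and each column keeps a single proposal.

```python
def maximal_matching(adj):
    # Round-based maximal matching on a bipartite graph given as
    # {row_vertex: [column_neighbors]}. Rounds repeat until no unmatched
    # row can make a proposal, at which point the matching is maximal.
    match_row, match_col = {}, {}
    while True:
        proposals = {}
        for u, nbrs in adj.items():
            if u in match_row:
                continue
            for v in nbrs:
                if v not in match_col:
                    proposals.setdefault(v, u)  # column keeps one proposal
                    break
        if not proposals:
            return match_row
        for v, u in proposals.items():
            match_row[u] = v
            match_col[v] = u
```

In the matrix formulation, a proposal round corresponds to a sparse matrix-vector-style operation over all unmatched vertices at once, which is what yields the bulk-synchronous parallelism.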
Chang, Kuo-Tsai
2007-01-01
This paper investigates electrical transient characteristics of a Rosen-type piezoelectric transformer (PT), including maximum voltages, time constants, energy losses and average powers, and their improvements immediately after turning OFF. A parallel resistor connected to both input terminals of the PT is needed to improve the transient characteristics. An equivalent circuit for the PT is first given. Then, an open-circuit voltage, involving a direct current (DC) component and an alternating current (AC) component, and its related energy losses are derived from the equivalent circuit with initial conditions. Moreover, an AC power control system, including a DC-to-AC resonant inverter, a control switch and electronic instruments, is constructed to determine the electrical characteristics of the OFF transient state. Furthermore, the effects of the parallel resistor on the transient characteristics at different parallel resistances are measured. The advantages of adding the parallel resistor also are discussed. From the measured results, the DC time constant is greatly decreased from 9 to 0.04 ms by a 10 kΩ parallel resistance under open output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.
The Futility package contains the following: 1) Definition of the size of integers and real numbers; 2) A generic unit test harness; 3) Definitions for some basic extensions to the Fortran language: arbitrary length strings, a parameter list construct, exception handlers, command line processor, timers; 4) Geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) File wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) Parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) Math utilities: BLAS, matrix and vector definitions, linear solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) Misc: random number generator, water saturation properties, sorting algorithms.
Cellular automaton model for molecular traffic jams
NASA Astrophysics Data System (ADS)
Belitsky, V.; Schütz, G. M.
2011-07-01
We consider the time evolution of an exactly solvable cellular automaton with random initial conditions both in the large-scale hydrodynamic limit and on the microscopic level. This model is a version of the totally asymmetric simple exclusion process with sublattice parallel update and thus may serve as a model for studying traffic jams in systems of self-driven particles. We study the emergence of shocks from the microscopic dynamics of the model. In particular, we introduce shock measures whose time evolution we can compute explicitly, both in the thermodynamic limit and for open boundaries where a boundary-induced phase transition driven by the motion of a shock occurs. The motion of the shock, which results from the collective dynamics of the exclusion particles, is a random walk with an internal degree of freedom that determines the jump direction. This type of hopping dynamics is reminiscent of some transport phenomena in biological systems.
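A deterministic sublattice-parallel exclusion step, the kind of update rule underlying this automaton, can be sketched as follows. This is an illustrative simplification (hop probability 1, periodic boundary), not the paper's exact open-boundary model.

```python
def sublattice_update(tau, offset):
    # Half time step of the sublattice-parallel update: sites are paired
    # starting at `offset`, and a particle hops one site to the right
    # whenever its right pair-partner is empty.
    tau = list(tau)
    n = len(tau)
    for i in range(offset, n, 2):
        j = (i + 1) % n
        if tau[i] == 1 and tau[j] == 0:
            tau[i], tau[j] = 0, 1
    return tau

def step(tau):
    # Full time step: update the even sublattice pairs, then the odd ones.
    return sublattice_update(sublattice_update(tau, 0), 1)
```

Because all pairs within a sublattice are disjoint, each half-step can be applied to every pair simultaneously, which is what makes the update "parallel" and the model exactly solvable.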
Bourbeau, Jean; Casan, Pere; Tognella, Silvia; Haidl, Peter; Texereau, Joëlle B; Kessler, Romain
2016-01-01
Most hospitalizations and costs related to COPD are due to exacerbations and insufficient disease management. The COPD Patient Management European Trial (COMET) is investigating a home-based multicomponent COPD self-management program designed to reduce exacerbations and hospital admissions. Multicenter parallel randomized controlled, open-label superiority trial. Thirty-three hospitals in four European countries. A total of 345 patients with Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage III/IV COPD. The program includes extensive patient coaching by health care professionals to improve self-management (eg, develop skills to better manage their disease), an e-health platform for reporting frequent health status updates, rapid intervention when necessary, and oxygen therapy monitoring. The comparator is usual management as per the center's routine practice. Outcome measures are the yearly number of hospital days for acute care, number of exacerbations, quality of life, deaths, and costs.
Papakostas, George I; Fava, Maurizio; Baer, Lee; Swee, Michaela B; Jaeger, Adrienne; Bobo, William V; Shelton, Richard C
2015-12-01
The authors sought to test the efficacy of adjunctive ziprasidone in adults with nonpsychotic unipolar major depression experiencing persistent symptoms after 8 weeks of open-label treatment with escitalopram. This was an 8-week, randomized, double-blind, parallel-group, placebo-controlled trial conducted at three academic medical centers. Participants were 139 outpatients with persistent symptoms of major depression after an 8-week open-label trial of escitalopram (phase 1), randomly assigned in a 1:1 ratio to receive adjunctive ziprasidone (escitalopram plus ziprasidone, N=71) or adjunctive placebo (escitalopram plus placebo, N=68), with 8 weekly follow-up assessments. The primary outcome measure was clinical response, defined as a reduction of at least 50% in score on the 17-item Hamilton Depression Rating Scale (HAM-D). The Hamilton Anxiety Rating scale (HAM-A) and Visual Analog Scale for Pain were defined a priori as key secondary outcome measures. Rates of clinical response (35.2% compared with 20.5%) and mean improvement in HAM-D total scores (-6.4 [SD=6.4] compared with -3.3 [SD=6.2]) were significantly greater for the escitalopram plus ziprasidone group. Several secondary measures of antidepressant efficacy also favored adjunctive ziprasidone. The escitalopram plus ziprasidone group also showed significantly greater improvement on HAM-A score but not on Visual Analog Scale for Pain score. Ten (14%) patients in the escitalopram plus ziprasidone group discontinued treatment because of intolerance, compared with none in the escitalopram plus placebo group. Ziprasidone as an adjunct to escitalopram demonstrated antidepressant efficacy in adult patients with major depressive disorder experiencing persistent symptoms after 8 weeks of open-label treatment with escitalopram.
The open quantum Brownian motions
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2014-09-01
Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers—with internal degrees of freedom which serve as random gyroscopes—interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation:
$\mathcal{H}_z$: orbital (walker) Hilbert space, $\mathbb{C}^{\mathbb{Z}}$ in the discrete case, $L^2(\mathbb{R})$ in the continuum.
$\mathcal{H}_c$: internal spin (gyroscope) Hilbert space.
$\mathcal{H}_{\mathrm{sys}} = \mathcal{H}_z \otimes \mathcal{H}_c$: system Hilbert space.
$\mathcal{H}_p$: probe (quantum coin) Hilbert space, $\mathcal{H}_p = \mathbb{C}^2$.
$\rho^{\mathrm{tot}}_t$: density matrix for the total system (walker + internal spin + quantum coins).
$\bar{\rho}_t$: reduced density matrix on $\mathcal{H}_{\mathrm{sys}}$: $\bar{\rho}_t = \int dx\,dy\, \bar{\rho}_t(x,y) \otimes |x\rangle_z\langle y|$.
$\hat{\rho}_t$: system density matrix in a quantum trajectory: $\hat{\rho}_t = \int dx\,dy\, \hat{\rho}_t(x,y) \otimes |x\rangle_z\langle y|$; if diagonal and localized in position, $\hat{\rho}_t = \rho_t \otimes |X_t\rangle_z\langle X_t|$.
$\rho_t$: internal density matrix in a simple quantum trajectory.
$X_t$: walker position in a simple quantum trajectory.
$B_t$: normalized Brownian motion.
$\xi_t$, $\xi_t^{\dagger}$: quantum noises.
Maeda-Yamamoto, Mari; Ema, Kaori; Monobe, Manami; Shibuichi, Ikuo; Shinoda, Yuki; Yamamoto, Tomohiro; Fujisawa, Takao
2009-09-01
We previously reported that 'benifuuki' green tea containing O-methylated catechin significantly relieved the symptoms of perennial or seasonal rhinitis compared with a placebo green tea that did not contain O-methylated catechin in randomized double-blind clinical trials. In this study we assessed the effects of 'benifuuki' green tea on clinical symptoms of seasonal allergic rhinitis. An open-label, single-dose, randomized, parallel-group study was performed on 38 subjects with Japanese cedar pollinosis. The subjects were randomly assigned to long-term (December 27, 2006, to April 8, 2007; beginning 1.5 months before pollen exposure) or short-term (February 15, 2007, after the start of cedar pollen dispersal, to April 8, 2007) drinking of a 'benifuuki' tea drink containing 34 mg O-methylated catechin per day. Each subject recorded their daily symptom scores in a diary. The primary efficacy variable was the mean weekly nasal symptom medication score during the study period. The nasal symptom medication score in the long-term intake group was significantly lower than that of the short-term intake group at the peak of pollen dispersal. The symptom scores for throat pain, nose-blowing, tears, and hindrance to activities of daily living were significantly better in the long-term group than the short-term group. In particular, the differences in the symptom scores for throat pain and nose-blowing between the 2 groups were marked. We conclude that drinking 'benifuuki' tea for 1.5 months prior to the cedar pollen season is effective in reducing symptom scores for Japanese cedar pollinosis.
Parallel Algorithms for Switching Edges in Heterogeneous Graphs.
Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-06-01
An edge switch is an operation on a graph (or network) where two edges are selected randomly and one of their end vertices are swapped with each other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
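A single edge-switch attempt, including the checks that keep the graph simple, can be sketched serially. This is an illustration of the operation defined above, not the authors' distributed-memory algorithm; the difficulty the paper addresses is performing many such operations concurrently without violating these checks.

```python
import random

def try_edge_switch(edges, rng=random):
    # Pick two edges (a, b) and (c, d) at random and swap endpoints to
    # (a, d) and (c, b). The switch is kept only if the graph stays simple
    # (no self-loops, no parallel edges); either way, every vertex keeps
    # its degree, so the degree sequence is preserved.
    i, j = rng.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[i], edges[j]
    if a == d or c == b:
        return False  # rejected: would create a self-loop
    edge_set = {frozenset(e) for e in edges}
    if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
        return False  # rejected: would create a parallel edge
    edges[i], edges[j] = (a, d), (c, b)
    return True
```

In the parallel setting, two processors switching edges that share an endpoint can each pass this check locally yet jointly create a parallel edge, which is the dependency that forces the synchronization discussed above.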
Parallel Algorithms for Switching Edges in Heterogeneous Graphs
Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-01-01
An edge switch is an operation on a graph (or network) where two edges are selected randomly and one of their end vertices are swapped with each other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680
A Sticky Chain Model of the Elongation and Unfolding of Escherichia coli P Pili under Stress
Andersson, Magnus; Fällman, Erik; Uhlin, Bernt Eric; Axner, Ove
2006-01-01
A model of the elongation of P pili expressed by uropathogenic Escherichia coli exposed to stress is presented. The model is based upon the sticky chain concept, which combines Hooke's law for elongation of the layer-to-layer and head-to-tail bonds between neighboring units in the PapA rod with a kinetic description of the opening and closing of bonds, described by rate equations and an energy landscape model. It provides an accurate description of the elongation behavior of P pili under stress and supports a hypothesis that the PapA rod shows all three basic stereotypes of elongation/unfolding: elongation of bonds in parallel, the zipper mode of unfolding, and elongation and unfolding of bonds in series. The first two elongation regions are dominated by cooperative bond opening, in which each bond is influenced by its neighbor, whereas the third region can be described by individual bond opening, in which the bonds open and close randomly. A methodology for swift extraction of model parameters from force-versus-elongation measurements performed under equilibrium conditions is derived. Entities such as the free energy, the stiffness, the elastic elongation, the opening length of the various bonds, and the number of PapA units in the rod are determined. PMID:16361334
Groves, Benjamin; Kuchina, Anna; Rosenberg, Alexander B.; Jojic, Nebojsa; Fields, Stanley; Seelig, Georg
2017-01-01
Our ability to predict protein expression from DNA sequence alone remains poor, reflecting our limited understanding of cis-regulatory grammar and hampering the design of engineered genes for synthetic biology applications. Here, we generate a model that predicts the protein expression of the 5′ untranslated region (UTR) of mRNAs in the yeast Saccharomyces cerevisiae. We constructed a library of half a million 50-nucleotide-long random 5′ UTRs and assayed their activity in a massively parallel growth selection experiment. The resulting data allow us to quantify the impact on protein expression of Kozak sequence composition, upstream open reading frames (uORFs), and secondary structure. We trained a convolutional neural network (CNN) on the random library and showed that it performs well at predicting the protein expression of both a held-out set of the random 5′ UTRs and native S. cerevisiae 5′ UTRs. The model was additionally used to computationally evolve highly active 5′ UTRs. We confirmed experimentally that the great majority of the evolved sequences led to higher protein expression rates than the starting sequences, demonstrating the predictive power of this model. PMID:29097404
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to advances in computer technology, most recently the emergence of multi-core high-performance computers. Parallel computing is therefore key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using message passing interface (MPI) protocols and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance, using a typical multi-core high-performance workstation.
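Both parallelization styles in this abstract rest on the same property: Monte Carlo histories are independent, so they can be split across workers, each with its own random stream. The sketch below is a generic illustration of that decomposition in Python (estimating pi by dart-throwing), not PHITS's actual MPI/OpenMP machinery; all names are invented for the example.

```python
import math
from concurrent.futures import ThreadPoolExecutor
from random import Random

def count_hits(seed, n):
    """One worker's share of the histories: count darts inside the unit circle."""
    rng = Random(seed)  # each worker owns an independent random stream
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(total_samples, workers=4):
    """Split independent Monte Carlo histories across workers and combine.

    Threads stand in for OpenMP threads (or MPI ranks) here; because of
    Python's GIL this sketch shows the decomposition, not a real speedup.
    """
    per_worker = total_samples // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_hits, 1000 + w, per_worker)
                   for w in range(workers)]
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (per_worker * workers)
```

Giving every worker a distinct seed mirrors the per-rank or per-thread random streams that a real parallel Monte Carlo code must maintain to keep histories statistically independent.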
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
Research of influence of open-winding faults on properties of brushless permanent magnets motor
NASA Astrophysics Data System (ADS)
Bogusz, Piotr; Korkosz, Mariusz; Powrózek, Adam; Prokop, Jan; Wygonik, Piotr
2017-12-01
The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or several parallel branches, operation of the motor can be continued. Waveforms of currents, voltages and electromagnetic torque were determined in the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.
Nyström, Thomas; Padro Santos, Irene; Hedberg, Fredric; Wardell, Johan; Witt, Nils; Cao, Yang; Bojö, Leif; Nilsson, Bo; Jendle, Johan
2017-01-01
We aimed to investigate the effect of liraglutide treatment on heart function in type 2 diabetes (T2D) patients with subclinical heart failure, in a randomized open parallel-group trial. 62 T2D patients (45 male) with subclinical heart failure were randomized to either once-daily liraglutide 1.8 mg or glimepiride 4 mg, both added on to metformin 1 g twice a day. Mitral annular systolic (s') and early diastolic (e') velocities were measured at rest and during bicycle ergometer exercise, using tissue Doppler echocardiography. The primary endpoint was 18-week treatment changes in the longitudinal functional reserve index (LFRI diastolic/systolic). Clinical characteristics between groups (liraglutide = 33 vs. glimepiride = 29) were well matched. At baseline, left ventricle ejection fraction (53.7 vs. 53.6%) and global longitudinal strain (-15.3 vs. -16.5%) did not differ between groups. There were no significant differences in mitral flow velocities between groups. For the primary endpoint, there was no treatment change [95% confidence interval] for LFRI diastolic (-0.18 vs. -0.53 [-0.28, 2.59; p = 0.19]) or LFRI systolic (-0.10 vs. -0.18 [-1.0, 1.7; p = 0.54]); for the secondary endpoints, there was a significant treatment change in body weight (-3.7 vs. -0.2 kg [-5.5, -1.4; p = 0.001]), waist circumference (-3.1 vs. -0.8 cm [-4.2, -0.4; p = 0.019]), and heart rate (HR) (6.3 vs. -2.3 bpm [-3.0, 14.2; p = 0.003]), with no such treatment change in hemoglobin A1c levels (-11.0 vs. -9.2 mmol/mol [-7.0, 2.6; p = 0.37]) between groups. 18-week treatment with liraglutide compared with glimepiride did not improve LFRI diastolic/systolic, but increased HR. There was a significant treatment change in body weight reduction in favor of liraglutide treatment.
Amstutz, Alain; Nsakala, Bienvenu Lengo; Vanobberghen, Fiona; Muhairwe, Josephine; Glass, Tracy Renée; Achieng, Beatrice; Sepeka, Mamorena; Tlali, Katleho; Sao, Lebohang; Thin, Kyaw; Klimkait, Thomas; Battegay, Manuel; Labhardt, Niklaus Daniel
2018-02-12
The World Health Organization (WHO) recommends viral load (VL) measurement as the preferred monitoring strategy for HIV-infected individuals on antiretroviral therapy (ART) in resource-limited settings. The new WHO guidelines 2016 continue to define virologic failure as two consecutive VL ≥1000 copies/mL (at least 3 months apart) despite good adherence, triggering switch to second-line therapy. However, the threshold of 1000 copies/mL for defining virologic failure is based on low-quality evidence. Observational studies have shown that individuals with low-level viremia (measurable but below 1000 copies/mL) are at increased risk for accumulation of resistance mutations and subsequent virologic failure. The SESOTHO trial assesses a lower threshold for switch to second-line ART in patients with sustained unsuppressed VL. In this multicenter, parallel-group, open-label, randomized controlled trial conducted in Lesotho, patients on first-line ART with two consecutive unsuppressed VL measurements ≥100 copies/mL, where the second VL is between 100 and 999 copies/mL, will either be switched to second-line ART immediately (intervention group) or not be switched (standard of care, according to WHO guidelines). The primary endpoint is viral resuppression (VL < 50 copies/mL) 9 months after randomization. We will enrol 80 patients, giving us 90% power to detect a difference of 35% in viral resuppression between the groups (assuming a two-sided 5% alpha error). For our primary analysis, we will use a modified intention-to-treat set, with participants who are lost to care, die, or cross over counted as failures to resuppress, and logistic regression models adjusted for the prespecified stratification variables. The SESOTHO trial challenges the current WHO guidelines, assessing an alternative, lower VL threshold for patients with unsuppressed VL on first-line ART. This trial will provide data to inform future WHO guidelines on VL thresholds to recommend switch to second-line ART.
ClinicalTrials.gov ( NCT03088241 ), registered May 05, 2017.
Oscillations and chaos in neural networks: an exactly solvable model.
Wang, L P; Pichler, E E; Ross, J
1990-01-01
We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
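The distinction between parallel (synchronous) and random sequential (asynchronous) updating can be made concrete with a first-order Hopfield-style network of McCulloch-Pitts neurons and Hebbian weights. This is a minimal deterministic sketch, omitting the paper's higher-order couplings, synaptic dilution, and noise; all function names are illustrative.

```python
import random

def hebbian_weights(patterns):
    """Hebbian outer-product weights for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def _sign(x):
    return 1 if x >= 0 else -1

def parallel_step(w, state):
    """Parallel (synchronous) dynamics: every neuron updates from the old state."""
    n = len(state)
    return [_sign(sum(w[i][j] * state[j] for j in range(n))) for i in range(n)]

def sequential_sweep(w, state, rng):
    """Random sequential dynamics: neurons update one at a time, in random order."""
    s = list(state)
    for i in rng.sample(range(len(s)), len(s)):
        s[i] = _sign(sum(w[i][j] * s[j] for j in range(len(s))))
    return s
```

With a single stored pattern both dynamics retrieve it from a noisy start; the oscillatory and chaotic regimes reported in the paper arise only once higher-order couplings, dilution, and noise are included.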
Melin, Eva O; Svensson, Ralph; Gustavsson, Sven-Åke; Winberg, Agneta; Denward-Olah, Ewa; Landin-Olsson, Mona; Thulesius, Hans O
2016-04-27
Depression is linked with alexithymia, anxiety, high HbA1c concentrations, disturbances of cortisol secretion, increased prevalence of diabetes complications and all-cause mortality. The psycho-educational method 'affect school with script analysis' and the mind-body therapy 'basic body awareness treatment' will be trialled in patients with diabetes, high HbA1c concentrations and psychological symptoms. The primary outcome measure is change in symptoms of depression. Secondary outcome measures are changes in HbA1c concentrations, midnight salivary cortisol concentration, symptoms of alexithymia, anxiety, self-image measures, use of antidepressants, incidence of diabetes complications and mortality. Two studies will be performed. Study I is an open-labeled parallel-group study with a two-arm randomized controlled trial design. Patients are randomized to either affect school with script analysis or to basic body awareness treatment. According to power calculations, 64 persons are required in each intervention arm at the last follow-up session. Patients with type 1 or type 2 diabetes were recruited from one hospital diabetes outpatient clinic in 2009. The trial will be completed in 2016. Study II is a multicentre open-labeled parallel-group three-arm randomized controlled trial. Patients will be randomized to affect school with script analysis, to basic body awareness treatment, or to treatment as usual. Power calculations show that 70 persons are required in each arm at the last follow-up session. Patients with type 2 diabetes will be recruited from primary care. This study will start in 2016 and finish in 2023. For both studies, the inclusion criteria are: HbA1c concentration ≥62.5 mmol/mol; depression, alexithymia, anxiety or a negative self-image; age 18-59 years; and diabetes duration ≥1 year. The exclusion criteria are pregnancy, severe comorbidities, cognitive deficiencies or inadequate Swedish. 
Depression, anxiety, alexithymia and self-image are assessed using self-report instruments. HbA1c concentration, midnight salivary cortisol concentration, blood pressure, serum lipid concentrations and anthropometrics are measured. Data are collected from computerized medical records and the Swedish national diabetes and causes of death registers. Whether "affect school with script analysis" reduces psychological symptoms, increases emotional awareness and improves diabetes-related factors will be tested, in comparison with "basic body awareness treatment" and treatment as usual. ClinicalTrials.gov: NCT01714986.
Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark
2010-08-01
In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech restructuring treatment. Post treatment, participants were randomly assigned to 2 trial arms: standard maintenance and standard maintenance plus VSM. Participants in the latter arm viewed stutter-free videos of themselves each day for 1 month. The addition of VSM did not improve speech outcomes, as measured by percent syllables stuttered, at either 1 or 6 months postrandomization. However, at the latter assessment, self-rating of worst stuttering severity by the VSM group was 10% better than that of the control group, and satisfaction with speech fluency was 20% better. Quality of life was also better for the VSM group, which was mildly to moderately impaired compared with moderate impairment in the control group. VSM intervention after treatment was associated with improvements in self-reported outcomes. The clinical implications of this finding are discussed.
Lee, Mi Young; Choi, Dong Seop; Lee, Moon Kyu; Lee, Hyoung Woo; Park, Tae Sun; Kim, Doo Man; Chung, Choon Hee; Kim, Duk Kyu; Kim, In Joo; Jang, Hak Chul; Park, Yong Soo; Kwon, Hyuk Sang; Lee, Seung Hun; Shin, Hee Kang
2014-01-01
We studied the efficacy and safety of acarbose in comparison with voglibose in type 2 diabetes patients whose blood glucose levels were inadequately controlled with basal insulin alone or in combination with metformin (or a sulfonylurea). This study was a 24-week prospective, open-label, randomized, active-controlled multi-center study. Participants were randomized to receive either acarbose (n=59, 300 mg/day) or voglibose (n=62, 0.9 mg/day). The mean HbA1c at week 24 was significantly decreased by approximately 0.7% from baseline in both the acarbose (from 8.43% ± 0.71% to 7.71% ± 0.93%) and voglibose groups (from 8.38% ± 0.73% to 7.68% ± 0.94%). The mean fasting plasma glucose level and self-monitoring of blood glucose data from 1 hr before and after each meal were significantly decreased at week 24 in comparison to baseline in both groups. The levels 1 hr after dinner at week 24 were significantly decreased in the acarbose group (from 233.54 ± 69.38 to 176.80 ± 46.63 mg/dL) compared with the voglibose group (from 224.18 ± 70.07 to 193.01 ± 55.39 mg/dL). In conclusion, both acarbose and voglibose are efficacious and safe in patients with type 2 diabetes who are inadequately controlled with basal insulin. (ClinicalTrials.gov number, NCT00970528).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Adding a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or on multiple compute nodes of a cluster as slaves, using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
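The Jacobian parallelization mentioned in this abstract works because each column of a finite-difference Jacobian needs one independent forward model run per perturbed parameter, so the columns can be farmed out to workers. The sketch below illustrates that idea generically in Python; it is not HGC5's MPI implementation, and all names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def jacobian_parallel(model, params, h=1e-6, workers=4):
    """Finite-difference Jacobian with one perturbed model run per parameter.

    `model` maps a parameter list to a list of outputs. Each column is an
    independent forward solution, so columns are computed concurrently.
    """
    base = model(params)

    def column(k):
        bumped = list(params)
        bumped[k] += h
        out = model(bumped)
        return [(o - b) / h for o, b in zip(out, base)]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        cols = list(pool.map(column, range(len(params))))
    # Transpose columns into rows: J[i][k] = d model_i / d param_k.
    return [[cols[k][i] for k in range(len(params))] for i in range(len(base))]
```

In a real calibration code each `model(...)` call is an expensive PDE solve, which is why distributing the columns over MPI ranks pays off far more than it does for this toy function.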
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
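An additive lagged Fibonacci generator is simple enough to sketch in software, together with the simplest parallelization strategy of giving each processor its own independently seeded register. This is an illustrative model with short lags (5, 17), not the paper's FPGA design; the class, the seeding scheme, and the default seed are assumptions for the example.

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator: x[n] = x[n-5] + x[n-17] (mod 2**32).

    The short lags (5, 17) are illustrative; production generators use much
    longer lags (e.g. 861, 1279) for longer periods and better quality.
    """
    MOD = 2 ** 32

    def __init__(self, seed_words):
        assert len(seed_words) == 17
        self.reg = list(seed_words)  # circular register of the last 17 outputs
        self.i = 0                   # position of x[n-17]

    def next(self):
        new = (self.reg[self.i] + self.reg[(self.i + 12) % 17]) % self.MOD
        self.reg[self.i] = new       # overwrite the oldest word
        self.i = (self.i + 1) % 17
        return new

def make_streams(nproc, master_seed=12345):
    """One independently seeded ALFG per processor (an assumed seeding scheme).

    Distinct random registers give distinct streams with overwhelming
    probability; rigorous parallel ALFG schemes instead choose registers
    proven to lie on disjoint cycles of the generator.
    """
    master = random.Random(master_seed)
    streams = []
    for _ in range(nproc):
        words = [master.getrandbits(32) for _ in range(17)]
        words[0] |= 1  # at least one odd word is needed for the maximal period
        streams.append(ALFG(words))
    return streams
```

Updating the whole 17-word register in lockstep is cheap in hardware, which is one reason lagged Fibonacci generators map well onto FPGAs.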
Parati, Gianfranco; Giglio, Alessia; Lonati, Laura; Destro, Maurizio; Ricci, Alessandra Rossi; Cagnoni, Francesca; Pini, Claudio; Venco, Achille; Maresca, Andrea Maria; Monza, Michela; Grandi, Anna Maria; Omboni, Stefano
2010-07-01
Increasing the dose or adding a second antihypertensive agent are 2 possible therapeutic choices when blood pressure (BP) is poorly controlled with monotherapy. This study investigated the effectiveness and tolerability of barnidipine 10 or 20 mg added to losartan 50 mg versus losartan 100 mg alone in patients with mild to moderate essential hypertension whose BP was uncontrolled by losartan 50-mg monotherapy. This was a 12-week, multicenter, randomized, open-label, parallel-group study. Eligible patients (aged 30-74 years) had uncontrolled hypertension, defined as office sitting diastolic BP (DBP) ≥90 mm Hg and/or systolic BP (SBP) ≥140 mm Hg, and mean daytime DBP ≥85 mm Hg and/or SBP ≥135 mm Hg. All were being treated with losartan 50 mg at enrollment. After a 1-week run-in period while taking losartan 50 mg, patients were randomly assigned to 6 weeks of treatment with open-label barnidipine 10 mg plus losartan 50 mg or losartan 100-mg monotherapy. At the end of this period, patients with uncontrolled BP had barnidipine doubled to 20 mg and continued for an additional 6 weeks, whereas patients not achieving control on treatment with losartan 100 mg were discontinued. Office BP was measured at each visit, whereas 24-hour ambulatory BP monitoring (ABPM) was performed at randomization and at the final visit (ie, after 12 weeks of treatment, or at 6 weeks for patients not controlled on losartan 100 mg). The intent-to-treat population included all randomized patients who received at least one dose of study treatment and had valid ABPM recordings at baseline and the final visit. The primary end point was the change in daytime DBP between baseline and 12 weeks of treatment, compared between the combination treatment and monotherapy. Adverse events (AEs) were evaluated during each study visit. A total of 93 patients were enrolled (age range, 30-75 years; 60% [56/93] men).
After the 1-week run-in period, 68 patients were randomly assigned to 6 weeks of treatment with open-label barnidipine 10 mg plus losartan 50 mg (n = 34) or losartan 100-mg monotherapy (n = 34). A total of 53 patients were evaluable (barnidipine plus losartan, n = 28; losartan, n = 25). After 6 weeks of treatment, 18 patients in the combination treatment group (64.3%) had their dose of barnidipine doubled from 10 to 20 mg because BP was not normalized by treatment, whereas 8 patients in the losartan group (32.0%) were discontinued for the same reason. The between-treatment difference (losartan alone - combination treatment) for changes from baseline in daytime DBP was -1.7 mm Hg (95% CI, -5.8 to 2.4 mm Hg; P = NS). A similar result was observed for daytime SBP (-3.2 mm Hg; 95% CI, -8.1 to 1.7 mm Hg; P = NS). Likewise, no significant differences were found for nighttime values (mean [95% CI] DBP, 0.5 mm Hg [-3.7 to 4.7 mm Hg]; SBP, 1.5 mm Hg [-4.1 to 7.1 mm Hg]) or 24-hour values (DBP, -0.9 mm Hg [-4.8 to 2.9 mm Hg]; SBP, -1.6 mm Hg [-5.9 to 2.7 mm Hg]). Combination treatment was associated with a significantly higher rate of SBP responder patients (ie, <140 mm Hg or a reduction of ≥20 mm Hg) compared with monotherapy (82.1% [23/28] vs 56.0% [14/25]; P = 0.044). Drug-related AEs were reported in 4 patients taking combination treatment (total of 7 AEs, including 2 cases of peripheral edema and 1 each of tachycardia, atrial flutter, tinnitus, confusion, and polyuria) and in 2 patients taking losartan alone (total of 2 AEs, both tachycardia). This open-label, parallel-group study found that there was no significant difference in the BP-lowering effect of barnidipine 10 or 20 mg in combination with losartan 50 mg compared with losartan 100-mg monotherapy in these patients with essential hypertension previously uncontrolled by losartan 50-mg monotherapy. However, the percentage of responders for SBP was significantly higher with the combination.
Both treatments were generally well tolerated. European Union Drug Regulating Authorities Clinical Trials (EudraCT) no. 2006-001469-41.
Alvarez-Sabín, Jose; Santamarina, Estevo; Maisterra, Olga; Jacas, Carlos; Molina, Carlos; Quintana, Manuel
2016-01-01
Stroke, as the leading cause of physical disability and cognitive impairment, has a very significant impact on patients’ quality of life (QoL). The objective of this study is to determine the long-term effect of citicoline treatment on QoL and cognitive performance in patients with a first ischemic stroke. This is an open-label, randomized, parallel study of citicoline vs. usual treatment. All subjects were selected 6 weeks after suffering a first ischemic stroke and randomized into parallel arms. Neuropsychological evaluation was performed at 1 month, 6 months, 1 year and 2 years after stroke, and QoL was measured using the EuroQoL-5D questionnaire at 2 years. 163 patients were followed for 2 years. The mean age was 67.5 years, and 50.9% were women. Age and absence of citicoline treatment were independent predictors of both utility and poor quality of life. Patients with cognitive impairment had a poorer QoL at 2 years (0.55 vs. 0.66 in utility, p = 0.015). Citicoline treatment significantly improved cognitive status during follow-up (p = 0.005). In conclusion, long-term citicoline treatment is associated with a better QoL and improved cognitive status 2 years after a first ischemic stroke. PMID:26999113
Implementation of highly parallel and large scale GW calculations within the OpenAtom software
NASA Astrophysics Data System (ADS)
Ismail-Beigi, Sohrab
The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.
NASA Astrophysics Data System (ADS)
Sun, Rui; Xiao, Heng
2016-04-01
With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in the chemical engineering and mining industries. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in simulations using up to O(10^7) particles.
Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP
NASA Technical Reports Server (NTRS)
Norton, C.; Balsara, D.
1999-01-01
Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.
Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; VanderWijngaart, Rob F.
2003-01-01
We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. The NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3.
Optimization of memory and cache usage was applied to several benchmarks, notably BT and SP, resulting in better sequential performance. To overcome the lack of an HPF performance model and to guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and the absence of an easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with the MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.
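The outermost-loop strategy mentioned above can be illustrated with a small sketch. The function and loop bounds below are hypothetical, chosen only to show directive placement; each thread receives whole outer iterations, so the fork/join cost is paid once for the entire nest:

```c
#include <stdlib.h>

/* Hypothetical loop nest illustrating outermost-loop OpenMP
 * parallelization: the directive sits on the k loop, so each thread
 * processes entire (j, i) planes, maximizing granularity. Returns the
 * sum of all written values so the result is easy to check. */
double outer_loop_scale(int nk, int nj, int ni) {
    double *a = malloc((size_t)nk * nj * ni * sizeof *a);
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int k = 0; k < nk; k++) {
        double local = 0.0;                 /* per-thread partial sum */
        for (int j = 0; j < nj; j++)
            for (int i = 0; i < ni; i++) {
                a[((size_t)k * nj + j) * ni + i] = 0.25 * (k + 1);
                local += 0.25 * (k + 1);
            }
        sum += local;                       /* combined by reduction */
    }
    free(a);
    return sum;
}
```

Placing the directive on an inner loop instead would multiply the number of parallel regions by the outer trip counts, which is exactly the overhead the benchmark implementations avoid.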
Parallel processing implementation for the coupled transport of photons and electrons using OpenMP
NASA Astrophysics Data System (ADS)
Doerner, Edgardo
2016-05-01
In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
Goldenberg, N.A.; Abshire, T.; Blatchford, P.J.; Fenton, L.Z.; Halperin, J.L.; Hiatt, W.R.; Kessler, C.M.; Kittelson, J.M.; Manco-Johnson, M.J.; Spyropoulos, A.C.; Steg, P.G.; Stence, N.V.; Turpie, A.G.G.; Schulman, S.
2015-01-01
BACKGROUND Randomized controlled trials (RCTs) in pediatric venous thromboembolism (VTE) treatment have been challenged by unsubstantiated design assumptions and/or poor accrual. Pilot/feasibility (P/F) studies are critical to future RCT success. METHODS Kids-DOTT is a multicenter RCT investigating non-inferiority of a 6-week (shortened) vs. 3-month (conventional) duration of anticoagulation in patients <21 years old with provoked venous thrombosis. Primary efficacy and safety endpoints are symptomatic recurrent VTE at 1 year and anticoagulant-related, clinically-relevant bleeding. In the P/F phase, 100 participants were enrolled in an open, blinded endpoint, parallel-cohort RCT design. RESULTS No eligibility violations or randomization errors occurred. Of enrolled patients, 69% were randomized, 3% missed the randomization window, and 28% were followed in pre-specified observational cohorts for completely occlusive thrombosis or persistent antiphospholipid antibodies. Retention at 1 year was 82%. Inter-observer agreement between local vs. blinded central determination of venous occlusion by imaging at 6 weeks post-diagnosis was strong (κ-statistic=0.75; 95% confidence interval [CI] 0.48–1.0). Primary efficacy and safety event rates were 3.3% (95% CI 0.3–11.5%) and 1.4% (0.03–7.4%). CONCLUSIONS The P/F phase of Kids-DOTT has demonstrated validity of vascular imaging findings of occlusion as a randomization criterion, and defined randomization, retention, and endpoint rates to inform the fully-powered RCT. PMID:26118944
Open ear hearing aids in tinnitus therapy: An efficacy comparison with sound generators.
Parazzini, Marta; Del Bo, Luca; Jastreboff, Margaret; Tognola, Gabriella; Ravazzani, Paolo
2011-08-01
This study aimed to compare the effectiveness of tinnitus retraining therapy (TRT) with sound generators or with open ear hearing aids in the rehabilitation of tinnitus for a group of subjects who, according to Jastreboff categories, can be treated with both approaches to sound therapy (borderline of Category 1 and 2). This study was a prospective data collection with a parallel-group design which entailed that each subject was randomly assigned to one of the two treatment groups: half of the subjects were fitted binaurally with sound generators, and the other half with open ear hearing aids. Both groups received the same educational counselling sessions. Ninety-one subjects passed the screening criteria and were enrolled into the study. Structured interviews, with a variety of measures evaluated through the use of visual-analog scales and the tinnitus handicap inventory self-administered questionnaire, were performed before the therapy and at 3, 6, and 12 months during the therapy. Data showed a highly significant improvement in both tinnitus treatments starting from the first three months and up to one year of therapy, with a progressive and statistically significant decrease in the disability every three months. TRT was equally effective with sound generators or open ear hearing aids: they gave basically identical, statistically indistinguishable results.
Bernardo-Escudero, Roberto; Alonso-Campero, Rosalba; Francisco-Doce, María Teresa de Jesús; Cortés-Fuentes, Myriam; Villa-Vargas, Miriam; Angeles-Uribe, Juan
2012-12-01
The study aimed to assess the pharmacokinetics of a new, modified-release metoclopramide tablet, and compare it to an immediate-release tablet. A single and multiple-dose, randomized, open-label, parallel, pharmacokinetic study was conducted. Investigational products were administered to 26 healthy Hispanic Mexican male volunteers for two consecutive days: either one 30 mg modified-release tablet every 24 h, or one 10 mg immediate-release tablet every 8 h. Blood samples were collected after the first and last doses of metoclopramide. Plasma metoclopramide concentrations were determined by high-performance liquid chromatography. Safety and tolerability were assessed through vital signs measurements, clinical evaluations, and spontaneous reports from study subjects. All 26 subjects were included in the analyses [mean (SD) age: 27 (8) years, range 18-50; BMI: 23.65 (2.22) kg/m², range 18.01-27.47)]. Peak plasma concentrations were not statistically different with both formulations, but occurred significantly later (p < 0.05) with the modified-release form [tmax: 3.15 (1.28) vs. 0.85 (0.32) h and tmax-ss: 2.92 (1.19) vs. 1.04 (0.43) h]. There was no difference noted in the average plasma concentrations [Cavgτ: 23.90 (7.90) vs. 20.64 (7.43) ng/mL after the first dose; and Cavg-ss: 31.14 (9.64) vs. 35.59 (12.29) ng/mL after the last dose, (p > 0.05)]. One adverse event was reported in the test group (diarrhea), and one in the reference group (headache). This study suggests that the 30 mg modified-release metoclopramide tablets show features compatible with slow-release formulations when compared to immediate-release tablets, and are suitable for once-a-day administration.
Review of Recent Methodological Developments in Group-Randomized Trials: Part 1—Design
Turner, Elizabeth L.; Li, Fan; Gallis, John A.; Prague, Melanie; Murray, David M.
2017-01-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis. PMID:28426295
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
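A minimal sketch of the per-stream seeding idea behind parallel RNG libraries such as SPRNG and DCMT: each worker derives an independent, reproducible state from a base seed and a stream id, so results do not depend on thread count. The splitmix64-style generator and the toy pi estimator below are illustrative stand-ins, not the generators or physics used in MC4:

```c
#include <stdint.h>

typedef struct { uint64_t state; } rng_t;

/* Derive an independent stream from (base_seed, stream_id); the mixing
 * constant decorrelates neighbouring stream ids. */
rng_t rng_create(uint64_t base_seed, uint64_t stream_id) {
    rng_t r = { base_seed ^ (stream_id * 0x9E3779B97F4A7C15ULL) };
    return r;
}

/* splitmix64 step: a small, well-known 64-bit mixing generator. */
uint64_t rng_next(rng_t *r) {
    uint64_t z = (r->state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* Uniform double in [0, 1) from the top 53 bits. */
double rng_uniform(rng_t *r) {
    return (rng_next(r) >> 11) * (1.0 / 9007199254740992.0);
}

/* Toy Monte Carlo: estimate pi with one independent stream per chunk,
 * so the answer is reproducible for any number of threads. */
double mc_pi(uint64_t seed, int nstreams, int samples_per_stream) {
    long hits = 0;
    #pragma omp parallel for reduction(+:hits)
    for (int s = 0; s < nstreams; s++) {
        rng_t r = rng_create(seed, (uint64_t)s);
        for (int i = 0; i < samples_per_stream; i++) {
            double x = rng_uniform(&r), y = rng_uniform(&r);
            if (x * x + y * y < 1.0) hits++;
        }
    }
    return 4.0 * (double)hits / ((double)nstreams * samples_per_stream);
}
```

Libraries like SPRNG and DCMT provide statistically stronger stream-splitting guarantees than this seed-mixing trick, but the program structure, one private generator state per unit of parallel work, is the same.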
Coherent Patterns in Nuclei and in Financial Markets
NASA Astrophysics Data System (ADS)
Drożdż, S.; Kwapień, J.; Speth, J.
2010-07-01
In the area of traditional physics the atomic nucleus belongs to the most complex systems. It involves essentially all elements that characterize complexity, including the most distinctive one, whose essence is a permanent coexistence of coherent patterns and of randomness. From a more interdisciplinary perspective, it is the financial markets that represent an extreme complexity. Here, based on the matrix formalism, we set out some parallels between several characteristics of complexity in the above two systems. We refer, in particular, to the concept of random matrix theory, historically originating from nuclear physics considerations, and demonstrate its utility in quantifying characteristics of the coexistence of chaos and collectivity for the financial markets as well. In this latter case we show examples that illustrate the mapping of the matrix formulation onto concepts originating from graph theory. Finally, attention is drawn to some novel aspects of financial coherence, which open room for speculation as to whether analogous effects can be detected in atomic nuclei or in other strongly interacting Fermi systems.
Kario, Kazuomi; Hoshide, Satoshi
2014-06-01
The ACS1 (Azilsartan Circadian and Sleep Pressure - the first study) is a multicenter, randomized, open-label, two parallel-group study carried out to investigate the efficacy of an 8-week oral treatment with azilsartan 20 mg in comparison with amlodipine 5 mg. Patients with stage I or II primary hypertension will be randomly assigned to either an azilsartan group (n=350) or an amlodipine group (n=350). The primary endpoint is a change in nocturnal systolic blood pressure (BP) as measured by ambulatory BP monitoring at the end of follow-up relative to the baseline level during the run-in period. In addition, we will carry out the same analysis after dividing patients into four different nocturnal BP dipping statuses (extreme dippers, dippers, nondippers, and risers). The findings of this study will help in establishing an appropriate antihypertensive treatment for hypertensive patients with a disrupted circadian BP rhythm.
Hoshide, Satoshi
2014-01-01
Objective The ACS1 (Azilsartan Circadian and Sleep Pressure – the first study) is a multicenter, randomized, open-label, two parallel-group study carried out to investigate the efficacy of an 8-week oral treatment with azilsartan 20 mg in comparison with amlodipine 5 mg. Materials and methods Patients with stage I or II primary hypertension will be randomly assigned to either an azilsartan group (n=350) or an amlodipine group (n=350). The primary endpoint is a change in nocturnal systolic blood pressure (BP) as measured by ambulatory BP monitoring at the end of follow-up relative to the baseline level during the run-in period. In addition, we will carry out the same analysis after dividing patients into four different nocturnal BP dipping statuses (extreme dippers, dippers, nondippers, and risers). Conclusion The findings of this study will help in establishing an appropriate antihypertensive treatment for hypertensive patients with a disrupted circadian BP rhythm. PMID:24637789
Malmberg Gavelin, Hanna; Eskilsson, Therese; Boraxbekk, Carl-Johan; Josefsson, Maria; Stigsdotter Neely, Anna; Slunga Järvholm, Lisbeth
2018-04-25
Stress-related exhaustion has been associated with selective and enduring cognitive impairments. However, little is known about how to address cognitive deficits in stress rehabilitation and how this influences stress recovery over time. The aim of this open-label, parallel randomized controlled trial (ClinicalTrials.gov: NCT03073772) was to investigate the long-term effects of 12 weeks of cognitive or aerobic training on cognitive function, psychological health, and work ability for patients diagnosed with exhaustion disorder (ED). One hundred and thirty-two patients (111 women) participating in multimodal stress rehabilitation were randomized to receive additional cognitive training (n = 44), additional aerobic training (n = 47), or no additional training (n = 41). Treatment effects were assessed before, immediately after, and one year post intervention. The primary outcome was global cognitive function. Secondary outcomes included domain-specific cognition, self-reported burnout, depression, anxiety, fatigue and work ability, aerobic capacity, and sick-leave levels. Intention-to-treat analysis revealed a small but lasting improvement in global cognitive functioning for the cognitive training group, paralleled by a large improvement on a trained updating task. The aerobic training group showed improvements in aerobic capacity and episodic memory immediately after training, but no long-term benefits. General improvements in psychological health and work ability were observed, with no difference between interventional groups. Our findings suggest that cognitive training may be a viable method to address cognitive impairments for patients with ED, whereas the effects of aerobic exercise on cognition may be more limited when performed during a restricted time period. Implications for clinical practice in supporting patients with ED to adhere to treatment are discussed.
Tanaka, Kenichi; Nakayama, Masaaki; Kanno, Makoto; Kimura, Hiroshi; Watanabe, Kimio; Tani, Yoshihiro; Hayashi, Yoshimitsu; Asahi, Koichi; Terawaki, Hiroyuki; Watanabe, Tsuyoshi
2015-12-01
Hyperuricemia is associated with the onset of chronic kidney disease (CKD) and renal disease progression. Febuxostat, a novel, non-purine, selective xanthine oxidase inhibitor, has been reported to have a stronger effect on hyperuricemia than conventional therapy with allopurinol. However, few data are available regarding the clinical effect of febuxostat in patients with CKD. A prospective, randomized, open-label, parallel-group trial was conducted in hyperuricemic patients with stage 3 CKD. Patients were randomly assigned to treatment with febuxostat (n = 21) or to continue conventional therapy (n = 19). Treatment was continued for 12 weeks. The efficacy of febuxostat was determined by monitoring serum uric acid (UA) levels, blood pressures, renal function, and urinary protein levels. In addition, urinary liver-type fatty acid-binding protein (L-FABP), urinary albumin, urinary beta 2 microglobulin (β2MG), and serum high sensitivity C-reactive protein were measured before and 12 weeks after febuxostat was added to the treatment. Febuxostat resulted in a significantly greater reduction in serum UA (-2.2 mg/dL) than conventional therapy (-0.3 mg/dL, P < 0.001). Serum creatinine and estimated glomerular filtration rate changed little during the study period in each group. However, treatment with febuxostat for 12 weeks reduced the urinary levels of L-FABP, albumin, and β2MG, whereas the levels of these markers did not change in the control group. Febuxostat reduced serum UA levels more effectively than conventional therapy and might have a renoprotective effect in hyperuricemic patients with CKD. Further studies should clarify whether febuxostat prevents the progression of renal disease and improves the prognosis of CKD.
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.
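For context, standard OpenMP already offers a limited form of multidimensional work distribution through the collapse clause, which fuses nested loops into one iteration space divided among threads. The extensions proposed above go beyond this baseline (notably for point-to-point thread synchronization); the sketch below, with a hypothetical function, shows only the standard mechanism:

```c
/* Baseline multidimensional work distribution in standard OpenMP:
 * collapse(2) fuses the j and i loops into a single rows*cols
 * iteration space, so threads share work across both dimensions
 * instead of only across rows. */
void scale_2d(int rows, int cols, double *a, double factor) {
    #pragma omp parallel for collapse(2)
    for (int j = 0; j < rows; j++)
        for (int i = 0; i < cols; i++)
            a[j * cols + i] *= factor;
}
```

Without collapse, a grid with fewer rows than threads leaves some threads idle; collapsing the nest restores load balance, which is one of the work-distribution issues the paper addresses.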
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the maximum likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time, since it requires optimizing a regularization parameter. Unfortunately, these calculations are processed sequentially, whereas in fact all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation process of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is then implemented with the standard shared-memory application programming interface, i.e. Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
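A minimal sketch of the outer-loop idea, assuming the regularization parameter is tuned by a grid search whose candidate evaluations are independent. The objective function below is a toy stand-in for the C-GMM criterion, which would be far more expensive and is exactly what makes the parallel outer loop pay off:

```c
/* Toy stand-in for the C-GMM criterion: a smooth objective with a
 * unique minimum at alpha = 0.3. In the real algorithm each
 * evaluation involves the full continuum-of-moments computation. */
double toy_objective(double alpha) {
    return (alpha - 0.3) * (alpha - 0.3);
}

/* Outer-loop parallelization: candidate regularization parameters are
 * evaluated independently; the cheap best-so-far update is guarded by
 * a critical section. */
double best_alpha(const double *candidates, int n) {
    int best = 0;
    double best_val = toy_objective(candidates[0]);
    #pragma omp parallel for
    for (int k = 1; k < n; k++) {
        double v = toy_objective(candidates[k]);   /* expensive part */
        #pragma omp critical
        if (v < best_val) { best_val = v; best = k; }
    }
    return candidates[best];
}
```

Because the objective evaluations dominate and the critical section only protects a scalar comparison, the loop scales with the number of candidate values, mirroring why the outer loop was the better region to parallelize.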
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-Processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
Izgu, Nur; Ozdemir, Leyla; Bugdayci Basal, Fatma
2017-12-02
Patients receiving oxaliplatin may experience peripheral neuropathic pain and fatigue. Aromatherapy massage, a nonpharmacological method, may help to control these symptoms. The aim of this open-label, parallel-group, quasi-randomized controlled pilot study was to investigate the effect of aromatherapy massage on chemotherapy-induced peripheral neuropathic pain and fatigue in patients receiving oxaliplatin. Stratified randomization was used to allocate 46 patients to 2 groups: intervention (n = 22) and control (n = 24). Between week 1 and week 6, participants in the intervention group (IG) received aromatherapy massage 3 times a week. There was no intervention in weeks 7 and 8. The control group (CG) received routine care. Neuropathic pain was identified using the Douleur Neuropathique 4 Questions; severity of painful paresthesia was assessed with the numerical rating scale; fatigue severity was identified with the Piper Fatigue Scale. At week 6, the rate of neuropathic pain was significantly lower in the IG, when compared with the CG. The severity of painful paresthesia based on numerical rating scale in the IG was significantly lower than that in the CG at weeks 2, 4, and 6. At week 8, fatigue severity in the IG was significantly lower when compared with CG (P < .05). Aromatherapy massage may be useful in the management of chemotherapy-induced peripheral neuropathic pain and fatigue. This pilot study suggests that aromatherapy massage may be useful to relieve neuropathic pain and fatigue. However, there is a need for further clinical trials to validate the results of this study.
An evidence-based review of pregabalin for the treatment of fibromyalgia.
Arnold, Lesley M; Choy, Ernest; Clauw, Daniel J; Oka, Hiroshi; Whalen, Ed; Semel, David; Pauer, Lynne; Knapp, Lloyd
2018-04-16
Pregabalin, an α2-δ agonist, is approved for the treatment of fibromyalgia (FM) in the United States, Japan, and 37 other countries. The purpose of this article was to provide an in-depth, evidence-based summary of pregabalin for FM as demonstrated in randomized, placebo-controlled clinical studies, including open-label extensions, meta-analyses, combination studies and post-hoc analyses of clinical study data. PubMed was searched using the term "pregabalin AND fibromyalgia" and the Cochrane Library with the term "pregabalin". Both searches were conducted on 2 March 2017 with no other date limits set. Eleven randomized, double-blind, placebo-controlled clinical studies were identified including parallel group, two-way crossover and randomized withdrawal designs. One was a neuroimaging study. Five open-label extensions were also identified. Evidence of efficacy was demonstrated across the studies identified with significant and clinically relevant improvements in pain, sleep quality and patient status. The safety and tolerability profile of pregabalin is consistent across all the studies identified, including in adolescents, with dizziness and somnolence the most common adverse events reported. These efficacy and safety data are supported by meta-analyses (13 studies). Pregabalin in combination with other pharmacotherapies (7 studies) is also efficacious. Post-hoc analyses have demonstrated the onset of pregabalin efficacy as early as 1-2 days after starting treatment, examined the effect of pregabalin on other aspects of sleep beyond quality, and shown it is effective irrespective of the presence of a wide variety of patient demographic and clinical characteristics. Pregabalin is a treatment option for FM; its clinical utility has been comprehensively demonstrated.
Shrimankar, D D; Sathe, S R
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot be scaled beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that significant communication overhead is incurred even in OpenMP loop execution, and that it increases with the number of cores participating. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various sizes of input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
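The alignment recurrence has diagonal data dependencies, and one common shared-memory formulation, sketched below, sweeps anti-diagonals of the dynamic-programming matrix, whose cells are mutually independent. This is an illustrative sketch with unit match/mismatch/gap scores, not necessarily the authors' tiling scheme or scoring model:

```c
#include <stdlib.h>
#include <string.h>

/* Global-alignment score (match +1, mismatch -1, gap -1) computed by
 * sweeping anti-diagonals d = i + j. Cells on one diagonal depend only
 * on earlier diagonals, so each diagonal can be divided among threads. */
int align_score(const char *a, const char *b) {
    int m = (int)strlen(a), n = (int)strlen(b);
    int *H = malloc((size_t)(m + 1) * (n + 1) * sizeof *H);
    #define IDX(i, j) ((i) * (n + 1) + (j))

    for (int i = 0; i <= m; i++) H[IDX(i, 0)] = -i;  /* gap penalties */
    for (int j = 0; j <= n; j++) H[IDX(0, j)] = -j;

    for (int d = 2; d <= m + n; d++) {
        int lo = d - n > 1 ? d - n : 1;      /* valid i range on this */
        int hi = d - 1 < m ? d - 1 : m;      /* anti-diagonal          */
        #pragma omp parallel for
        for (int i = lo; i <= hi; i++) {
            int j = d - i;
            int match = H[IDX(i - 1, j - 1)] + (a[i - 1] == b[j - 1] ? 1 : -1);
            int del = H[IDX(i - 1, j)] - 1;
            int ins = H[IDX(i, j - 1)] - 1;
            int best = match > del ? match : del;
            H[IDX(i, j)] = best > ins ? best : ins;
        }
    }
    int score = H[IDX(m, n)];
    free(H);
    return score;
}
```

The per-diagonal barrier is the communication cost the abstract measures: with OpenMP it is an implicit join at each diagonal, while an MPI version would exchange halo cells between ranks, which is where tile size enters the tradeoff.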
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code into an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmarks to real-world applications is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experience with this toolkit.
Scalable Unix commands for parallel processors: a high-performance implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, E.; Lusk, E.; Gropp, W.
2001-06-22
We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or more generally whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be accessed from within a simple loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (hh:mm) using one core, and in 1:04 (hh:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
2014-01-01
Background Split-mouth randomized controlled trials (RCTs) are popular in oral health research. Meta-analyses frequently include trials of both split-mouth and parallel-arm designs to derive combined intervention effects. However, carry-over effects may induce bias in split-mouth RCTs. We aimed to assess whether intervention effect estimates differ between split-mouth and parallel-arm RCTs investigating the same questions. Methods We performed a meta-epidemiological study. We systematically reviewed meta-analyses including both split-mouth and parallel-arm RCTs with binary or continuous outcomes published up to February 2013. Two independent authors selected studies and extracted data. We used a two-step approach to quantify the differences between split-mouth and parallel-arm RCTs: for each meta-analysis, we first derived ratios of odds ratios (RORs) for dichotomous data and differences in standardized mean differences (∆SMDs) for continuous data; second, we pooled RORs or ∆SMDs across meta-analyses by random-effects meta-analysis models. Results We selected 18 systematic reviews, yielding 15 meta-analyses with binary outcomes (28 split-mouth and 28 parallel-arm RCTs) and 19 meta-analyses with continuous outcomes (28 split-mouth and 28 parallel-arm RCTs). Effect estimates did not differ between split-mouth and parallel-arm RCTs (mean ROR, 0.96, 95% confidence interval 0.52 to 1.80; mean ∆SMD, 0.08, 95% confidence interval -0.14 to 0.30). Conclusions Our study did not provide sufficient evidence for a difference in intervention effect estimates derived from split-mouth and parallel-arm RCTs. Authors should consider including split-mouth RCTs in their meta-analyses with suitable and appropriate analysis. PMID:24886043
Characterizing Task-Based OpenMP Programs
Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats
2015-01-01
Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than existing tools by using our method to characterize task-based performance. PMID:25860023
MacKenzie, K.R.
1958-09-01
An ion source is described for use in a calutron and more particularly deals with an improved filament arrangement for a calutron. According to the invention, the ion source block has a gas ionizing passage open along two adjoining sides of the block. A filament is disposed in overlying relation to one of the passage openings and has a greater width than the passage width, so that both the filament and opening lengths are parallel and extend in a transverse relation to the magnetic field. The other passage opening is parallel to the length of the magnetic field. This arrangement is effective in assisting in the production of a stable, long-lived arc for the general improvement of calutron operation.
Parallel protein secondary structure prediction based on neural networks.
Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi
2004-01-01
Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers of protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. Hydrophobicity matrix, orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) encodings are tested separately as the encoding schemes for DBNN. The experimental results contribute to the design of new encoding schemes. A new binary classifier for Helix versus not Helix (∼H) for DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other leading prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthreads and OpenMP are employed to parallelize DBNN on the hyperthreading-enabled Intel architecture. Speedup for 16 Pthreads is 4.9 and speedup for 16 OpenMP threads is 4 on the 4-processor shared memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology on the Intel architecture is efficient for parallel biological algorithms.
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical applications where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbors list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, whereas for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758
A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, as many compute nodes as there are adjustable parameters (when forward differences are used for the Jacobian approximation), or twice that number (if center differences are used), are employed to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
ERIC Educational Resources Information Center
Martins, Isabel Pavao; Leal, Gabriela; Fonseca, Isabel; Farrajota, Luisa; Aguiar, Marta; Fonseca, Jose; Lauterbach, Martin; Goncalves, Luis; Cary, M. Carmo; Ferreira, Joaquim J.; Ferro, Jose M.
2013-01-01
Background: There is conflicting evidence regarding the benefits of intensive speech and language therapy (SLT), particularly because intensity is often confounded with total SLT provided. Aims: A two-centre, randomized, rater-blinded, parallel study was conducted to compare the efficacy of 100 h of SLT in a regular (RT) versus intensive (IT)…
Parallel inhomogeneity and the Alfven resonance. 1: Open field lines
NASA Technical Reports Server (NTRS)
Hansen, P. J.; Harrold, B. G.
1994-01-01
In light of a recent demonstration of the general nonexistence of a singularity at the Alfven resonance in cold, ideal, linearized magnetohydrodynamics, we examine the effect of a small density gradient parallel to uniform, open ambient magnetic field lines. To lowest order, energy deposition is quantitatively unaffected but occurs continuously over a thickened layer. This effect is illustrated in a numerical analysis of a plasma sheet boundary layer model with perfectly absorbing boundary conditions. Consequences of the results are discussed, both for the open field line approximation and for the ensuing closed field line analysis.
Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2011-09-06
We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
Parallel implementation of approximate atomistic models of the AMOEBA polarizable model
NASA Astrophysics Data System (ADS)
Demerdash, Omar; Head-Gordon, Teresa
2016-11-01
In this work we present a replicated-data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those who are interested in how molecular properties converge in the condensed phase with respect to the many-body expansion (MBE); it provides a fruitful test bed for exploring different electrostatic embedding schemes, and it offers an interesting possibility for future exascale computing paradigms.
A Programming Model Performance Study Using the NAS Parallel Benchmarks
Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...
2010-01-01
Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other means to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve performance equal to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This time-consuming computation limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
Kim, Bong Hyun; Kim, Kyuseok; Nam, Hae Jeong
2017-01-31
Many previous studies of electroacupuncture used combined therapy of electroacupuncture and systemic manual acupuncture, so it was uncertain which treatment was effective. This study evaluated and compared the effects of systemic manual acupuncture, periauricular electroacupuncture and distal electroacupuncture for treating patients with tinnitus. A randomized, parallel, open-labeled exploratory trial was conducted. Subjects aged 20-75 years who had suffered from idiopathic tinnitus for more than 2 weeks were recruited from May 2013 to April 2014. The subjects were divided into three groups: a systemic manual acupuncture group (MA), a periauricular electroacupuncture group (PE), and a distal electroacupuncture group (DE). The groups were selected by random drawing. Nine acupoints (TE17, TE21, SI19, GB2, GB8, ST36, ST37, TE3 and TE9), two periauricular acupoints (TE17 and TE21), and four distal acupoints (TE3, TE9, ST36, and ST37) were selected. The treatment sessions were performed twice weekly for a total of eight sessions over 4 weeks. Outcomes were the tinnitus handicap inventory (THI) score and the loudness and uncomfortableness visual analogue scales (VAS). Demographic and clinical characteristics of all participants were compared between the groups upon admission using one-way analysis of variance (ANOVA). One-way ANOVA was used to evaluate the THI, VAS loudness, and VAS uncomfortableness scores. The least significant difference test was used as a post-hoc test. Thirty-nine subjects were eligible and their data were analyzed. No difference in THI and VAS loudness scores was observed between groups. The VAS uncomfortableness scores decreased significantly in MA and DE compared with those in PE. Within each group, all three treatments showed some effect on THI, VAS loudness and VAS uncomfortableness scores after treatment, except DE on THI.
There was no statistically significant difference between systemic manual acupuncture, periauricular electroacupuncture and distal electroacupuncture in tinnitus. However, all three treatments had some effect on tinnitus within each group before and after treatment. Systemic manual acupuncture and distal electroacupuncture had some effect on VAS uncomfortableness. Trial registration: KCT0001991 (CRIS, Clinical Research Information Service), 2016-08-01, retrospectively registered.
Ueda, Tamenobu; Kai, Hisashi; Imaizumi, Tsutomu
2012-07-01
The treatment of morning hypertension has not been established. We compared the efficacy and safety of a losartan/hydrochlorothiazide (HCTZ) combination and high-dose losartan in patients with morning hypertension. A prospective, randomized, open-labeled, parallel-group, multicenter trial enrolled 216 treated outpatients with morning hypertension evaluated by home blood pressure (BP) self-measurement. Patients were randomly assigned to receive a combination therapy of 50 mg losartan and 12.5 mg HCTZ (n=109) or a high-dose therapy with 100 mg losartan (n=107), each of which were administered once every morning. Primary efficacy end points were morning systolic BP (SBP) level and target BP achievement rate after 3 months of treatment. At baseline, BP levels were similar between the two therapy groups. Morning SBP was reduced from 150.3±10.1 to 131.5±11.5 mm Hg by combination therapy (P<0.001) and from 151.0±9.3 to 142.5±13.6 mm Hg by high-dose therapy (P<0.001). The morning SBP reduction was greater in the combination therapy group than in the high-dose therapy group (P<0.001). Combination therapy decreased evening SBP from 141.6±13.3 to 125.3±13.1 mm Hg (P<0.001), and high-dose therapy decreased evening SBP from 138.9±9.9 to 131.4±13.2 mm Hg (P<0.01). Although both therapies improved target BP achievement rates in the morning and evening (P<0.001 for both), combination therapy increased the achievement rates more than high-dose therapy (P<0.001 and P<0.05, respectively). In clinic measurements, combination therapy was superior to high-dose therapy in reducing SBP and improving the achievement rate (P<0.001 and P<0.01, respectively). Combination therapy decreased urine albumin excretion (P<0.05) whereas high-dose therapy reduced serum uric acid. Both therapies indicated strong adherence and few adverse effects (P<0.001). 
In conclusion, losartan/HCTZ combination therapy was more effective for controlling morning hypertension and reducing urine albumin than high-dose losartan.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900. No. of bytes in distributed program, including test data, etc.: 1422058. Distribution format: tar.gz. Programming language: C and CUDA.
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later; should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes; parallelized using MPI directives. RAM: 512 MB–732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory). Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. 
When the program enters a parallel statement, either the processors are divided out among the newly generated tasks (number of new tasks < number of processors) or the tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc. Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science, Volume 5759, 185-195.
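The two-sided division rule described above can be sketched as follows (the function name and the even-split, remainder-to-the-front policy are illustrative assumptions, not PM syntax):

```python
def assign(n_processors, n_tasks):
    """Sketch of the nested-parallelism division rule described above.

    Returns processors-per-task when tasks are scarcer than processors,
    otherwise tasks-per-processor; splits are even by default, with the
    remainder given to the front of the list (an assumed policy).
    """
    if n_tasks <= n_processors:
        base, extra = divmod(n_processors, n_tasks)   # split processors
        return [base + (1 if i < extra else 0) for i in range(n_tasks)]
    base, extra = divmod(n_tasks, n_processors)       # split tasks
    return [base + (1 if i < extra else 0) for i in range(n_processors)]

per_task = assign(8, 3)    # 3 tasks share 8 processors
per_proc = assign(4, 10)   # 10 tasks share 4 processors
```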
Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, C; Quinlan, D J; Willcock, J J
2008-12-12
Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
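A minimal serial analogue of this workflow, diagonalizing one dense random real symmetric matrix, might look like the following (a NumPy sketch; the paper's SIMD implementation on the MasPar systems is of course very different, but must reproduce the same mathematical result):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Toy stand-in for a disordered Hamiltonian: a dense random real
# symmetric matrix obtained by symmetrizing Gaussian entries.
a = rng.standard_normal((n, n))
h = (a + a.T) / 2.0
# eigh exploits symmetry and returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(h)
```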
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD)-a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power were different under different response time to treatment distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
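The style of simulation study described above (the original used an object-oriented R program) can be sketched in miniature. The 180-day binary response cutoff and the pooled two-proportion z-test below are illustrative assumptions, not the paper's analysis; only the exponential response times and the 355-day/42-day medians come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

def power_parallel(n_per_arm, n_sims=200, cutoff=180.0, z_crit=1.96):
    """Monte Carlo power of a two-arm parallel-group trial (sketch).

    Response times are exponential with medians 355 d (placebo) and
    42 d (drug); 'responded by `cutoff` days' and the z-test are
    assumed endpoints for illustration.
    """
    scale_placebo = 355.0 / np.log(2)   # exponential scale from median
    scale_drug = 42.0 / np.log(2)
    hits = 0
    for _ in range(n_sims):
        resp_p = rng.exponential(scale_placebo, n_per_arm) <= cutoff
        resp_d = rng.exponential(scale_drug, n_per_arm) <= cutoff
        pooled = (resp_p.sum() + resp_d.sum()) / (2 * n_per_arm)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
        if se > 0 and abs(resp_p.mean() - resp_d.mean()) / se > z_crit:
            hits += 1
    return hits / n_sims

power = power_parallel(30)
```

With the large assumed treatment effect, even 30 patients per arm yield power near 1; comparing such curves across designs and hazard distributions is the essence of the study above.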
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a possible effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.
Robinson, Thomas N; Jones, Edward L; Dunn, Christina L; Dunne, Bruce; Johnson, Elizabeth; Townsend, Nicole T; Paniccia, Alessandro; Stiegmann, Greg V
2015-06-01
The monopolar "Bovie" is used in virtually every laparoscopic operation. The active electrode and its cord emit radiofrequency energy that couples (or transfers) to nearby conductive material without direct contact. This phenomenon is increased when the active electrode cord is oriented parallel to another wire/cord. The parallel orientation of the "Bovie" and laparoscopic camera cords causes transfer of energy to the camera cord, resulting in cutaneous burns at the camera trocar incision. We hypothesized that separating the active electrode/camera cords would reduce thermal injury occurring at the camera trocar incision in comparison to parallel-oriented active electrode/camera cords. In this prospective, blinded, randomized controlled trial, patients undergoing standardized laparoscopic cholecystectomy were randomized to separated active electrode/camera cords or parallel-oriented active electrode/camera cords. The primary outcome variable was thermal injury determined by histology from skin biopsied at the camera trocar incision. Eighty-four patients participated. Baseline demographics were similar in the groups for age, sex, preoperative diagnosis, operative time, and blood loss. Thermal injury at the camera trocar incision was lower in the separated versus parallel group (31% vs 57%; P = 0.027). Separation of the laparoscopic camera cord from the active electrode cord decreases thermal injury from antenna coupling at the camera trocar incision in comparison to the parallel orientation of these cords. Therefore, parallel orientation of these cords (an arrangement promoted by integrated operating rooms) should be abandoned. The findings of this study should influence the operating room setup for all laparoscopic cases.
Teodorescu, C; Young, W C; Swan, G W S; Ellis, R F; Hassam, A B; Romero-Talamas, C A
2010-08-20
Interferometric density measurements in plasmas rotating in shaped, open magnetic fields demonstrate strong confinement of plasma parallel to the magnetic field, with density drops of more than a factor of 10. Taken together with spectroscopic measurements of supersonic E × B rotation of sonic Mach 2, these measurements are in agreement with ideal MHD theory which predicts large parallel pressure drops balanced by centrifugal forces in supersonically rotating plasmas.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Kim, Soo Jin; Lee, Young-Ki; Oh, Jieun; Cho, AJin; Noh, Jung Woo
2017-09-15
The association between the dialysate calcium level and coronary artery calcification (CAC) has not yet been evaluated in hemodialysis patients. The objective of this study was to determine whether lowering the dialysate calcium level would decrease the progression of CAC compared to using standard calcium dialysate. We conducted an open-label randomized trial with parallel groups. The patients were randomly assigned to either 12-month treatment with low calcium dialysate (LCD; 1.25 mmol/L, n=36) or standard calcium dialysate (SCD; 1.5 mmol/L, n=40). The primary outcome was the change in CAC scores assessed by 64-slice multidetector computed tomography after 12 months. During the treatment period, CAC scores increased in both groups, with the increase especially significant in the LCD group (402.5±776.8 to 580.5±1011.9, P=0.004). When we defined progressors as patients in the second and third tertiles of CAC change, the progressor group had a higher proportion of LCD-treated patients than SCD-treated patients (P=0.0229). In multivariate analysis, LCD treatment was a significant risk factor for an increase in CAC scores (odds ratio=5.720, 95% CI: 1.219-26.843, P=0.027). Use of LCD may accelerate the progression of CAC in patients on chronic hemodialysis over a 12-month period. Clinical Research Information Service [Internet]; Osong (Chungcheongbuk-do): Korea Centers for Disease Control and Prevention, Ministry of Health and Welfare (Republic of Korea), 2010: KCT0000942. Available from: https://cris.nih.go.kr/cris/search/search_result_st01_kren.jsp?seq=3572&sLeft=2&type=my. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Borius, Pierre-Yves; Garnier, Stéphanie Ranque; Baumstarck, Karine; Castinetti, Frédéric; Donnet, Anne; Guedj, Eric; Cornu, Philippe; Blond, Serge; Salas, Sébastien; Régis, Jean
2017-08-02
Hypophysectomy performed by craniotomy or percutaneous techniques leads to complete pain relief in more than 70% to 80% of cases of opioid-refractory cancer pain. Radiosurgery could be an interesting alternative approach to reduce complications. The primary goal is to assess analgesic efficacy compared with the standard of care. The secondary objectives are to assess ophthalmic and endocrine tolerance, drug consumption, quality of life, and mechanisms of analgesic action. The trial is multicenter, randomized, prospective, and open-label with 2 parallel groups. It concerns patients in palliative care suffering from nociceptive or mixed cancer pain refractory to standard opioid therapy. Participants will be randomly assigned to the control group, receiving standards of care for pain according to recommendations, or to the experimental group, receiving pituitary GammaKnife (Elekta, Stockholm, Sweden) radiosurgery (160 Gy delivered to the pituitary gland) associated with standards of care. Evaluation assessments will be taken at baseline, day 0, day 4, day 7, day 14, day 28, day 45, month 3, and month 6. We could expect pain improvement in 70% to 90% of cases at day 4. In addition, we will assess the safety of pituitary radiosurgery in a vulnerable population. The secondary endpoints could show a decrease in opioid consumption, good patient satisfaction, and improvement in quality of life. The design of this study is potentially the most appropriate to demonstrate the efficacy and safety of radiosurgery for this new indication. New recommendations could be obtained in order to improve pain relief and quality of life. Copyright © 2017 by the Congress of Neurological Surgeons
ERIC Educational Resources Information Center
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo
2012-01-01
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
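Classical (Horn's) parallel analysis, the heuristic the authors propose to revise, can be sketched as follows (the synthetic data and simulation count are illustrative, not the paper's Monte Carlo design):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Horn's parallel analysis in its basic form: retain factors whose
    observed correlation-matrix eigenvalues exceed the mean eigenvalues
    of random normal data with the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    ref = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        ref += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    ref /= n_sims
    return int(np.sum(obs > ref))

# Synthetic data: two independent latent factors, three indicators each.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 500, 1))
data = np.hstack([f1 + 0.3 * rng.standard_normal((500, 3)),
                  f2 + 0.3 * rng.standard_normal((500, 3))])
n_factors = parallel_analysis(data)
```

Using mean reference eigenvalues, as here, is exactly the choice the revision literature debates (percentile cutoffs are a common alternative).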
Placebo Effects and the Common Cold: A Randomized Controlled Trial
Barrett, Bruce; Brown, Roger; Rakel, Dave; Rabago, David; Marchand, Lucille; Scheder, Jo; Mundt, Marlon; Thomas, Gay; Barlow, Shari
2011-01-01
PURPOSE We wanted to determine whether the severity and duration of illness caused by the common cold are influenced by randomized assignment to open-label pills, compared with conventional double-blind allocation to active and placebo pills, compared with no pills at all. METHODS We undertook a randomized controlled trial among a population with new-onset common cold. Study participants were allocated to 4 parallel groups: (1) those receiving no pills, (2) those blinded to placebo, (3) those blinded to echinacea, and (4) those given open-label echinacea. Primary outcomes were illness duration and area-under-the-curve global severity. Secondary outcomes included neutrophil count and interleukin 8 levels from nasal wash at intake and 2 days later. RESULTS Of 719 randomized study participants, 2 were lost and 4 exited early. Participants were 64% female, 88% white, and aged 12 to 80 years. Mean illness duration for each group was 7.03 days for those in the no-pill group, 6.87 days for those blinded to placebo, 6.34 days for those blinded to echinacea, and 6.76 days for those in the open-label echinacea group. Mean global severity scores for the 4 groups were no pills, 286; blinded to placebo, 264; blinded to echinacea, 236; and open-label echinacea, 258. Between-group differences were not statistically significant. Comparing the no-pill with blinded to placebo groups, differences (95% confidence interval [CI]) were −0.16 days (95% CI, −0.90 to 0.58 days) for illness duration and −22 severity points (95% CI, −70 to 26 points) for global severity. Comparing the group blinded to echinacea with the open-label echinacea group, differences were 0.42 days (95% CI, −0.28 to 1.12 days) and 22 severity points (95% CI, −19 to 63 points). 
Median changes in interleukin 8 concentration and neutrophil cell count, respectively by group, were 30 pg/mL and 1 cell for the no-pill group, 39 pg/mL and 1 cell for the group blinded to placebo, 58 pg/mL and 2 cells for the group blinded to echinacea, and 70 pg/mL and 1 cell for the group with open-label echinacea; these differences were also not statistically significant. Among the 120 participants who at intake rated echinacea’s effectiveness as greater than 50 on a 100-point scale for which 100 is extremely effective, illness duration was 2.58 days shorter (95% CI, −4.47 to −0.68 days) in those blinded to placebo rather than given no pill, and the mean global severity score was 26% lower but not significantly different (−97.0; 95% CI, −249.8 to 55.8 points). In this subgroup, neither duration nor severity differed significantly between the group blinded to echinacea and the open-label echinacea group. CONCLUSIONS Participants randomized to the no-pill group tended to have longer and more severe illnesses than those who received pills. For the subgroup who believed in echinacea and received pills, illnesses were substantively shorter and less severe, regardless of whether the pills contained echinacea. These findings support the general idea that beliefs and feelings about treatments may be important and perhaps should be taken into consideration when making medical decisions. PMID:21747102
Simulation of partially coherent light propagation using parallel computing devices
NASA Astrophysics Data System (ADS)
Magalhães, Tiago C.; Rebordão, José M.
2017-08-01
Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e. using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32⁴, 64⁴), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
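Since the PyOpenCL path requires an OpenCL device, here is a device-free NumPy sketch of one of the two source models mentioned, the Gaussian Schell-model cross-spectral density on a 1-D source plane (σ and δ are illustrative values, not the paper's parameters):

```python
import numpy as np

# Gaussian Schell-model source: Gaussian intensity of width sigma and
# Gaussian degree of coherence of width delta (illustrative values).
sigma, delta = 1.0, 0.5
x = np.linspace(-3.0, 3.0, 256)
x1, x2 = np.meshgrid(x, x, indexing="ij")

def intensity(u):
    return np.exp(-u**2 / (2 * sigma**2))

mu = np.exp(-(x1 - x2) ** 2 / (2 * delta**2))     # degree of coherence
W = np.sqrt(intensity(x1) * intensity(x2)) * mu   # cross-spectral density
```

Propagating W, which requires applying the free-space kernel to both arguments, is the expensive, embarrassingly parallel step that the paper offloads through PyOpenCL.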
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
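A serial sketch of simulated annealing on a small random 3-SAT instance follows; the paper's contribution, Generalized Speculative Computation, parallelizes exactly this decision sequence, and the instance size and cooling schedule here are illustrative (much smaller than the 100-variable/425-clause cases above):

```python
import math
import random

random.seed(0)

def random_ksat(n_vars, n_clauses, k=3):
    """Random k-SAT: each clause has k distinct variables, random signs."""
    return [[v if random.random() < 0.5 else -v
             for v in random.sample(range(1, n_vars + 1), k)]
            for _ in range(n_clauses)]

def unsatisfied(clauses, assign):
    return sum(not any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in clauses)

def anneal(clauses, n_vars, t0=2.0, cooling=0.999, steps=4000):
    """Sequential simulated annealing over single-variable flips."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    cost, t = unsatisfied(clauses, assign), t0
    for _ in range(steps):
        v = random.randint(1, n_vars)
        assign[v] = not assign[v]
        new = unsatisfied(clauses, assign)
        if new <= cost or random.random() < math.exp((cost - new) / t):
            cost = new                   # accept: downhill, or uphill (Metropolis)
        else:
            assign[v] = not assign[v]    # reject: undo the flip
        t *= cooling
    return cost

clauses = random_ksat(50, 150)
remaining = anneal(clauses, 50)   # clauses still unsatisfied after annealing
```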
Optimising the Parallelisation of OpenFOAM Simulations
2014-06-01
UNCLASSIFIED. Optimising the Parallelisation of OpenFOAM Simulations. Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. ABSTRACT: The OpenFOAM computational fluid dynamics toolbox allows parallel computation of ... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI
Mantle flow through a tear in the Nazca slab inferred from shear wave splitting
NASA Astrophysics Data System (ADS)
Lynner, Colton; Anderson, Megan L.; Portner, Daniel E.; Beck, Susan L.; Gilbert, Hersh
2017-07-01
A tear in the subducting Nazca slab is located between the end of the Pampean flat slab and normally subducting oceanic lithosphere. Tomographic studies suggest mantle material flows through this opening. The best way to probe this hypothesis is through observations of seismic anisotropy, such as shear wave splitting. We examine patterns of shear wave splitting using data from two seismic deployments in Argentina that lie updip of the slab tear. We observe a simple pattern of plate-motion-parallel fast splitting directions, indicative of plate-motion-parallel mantle flow, beneath the majority of the stations. Our observed splitting contrasts with previous observations to the north and south of the flat slab region. Since plate-motion-parallel splitting occurs only coincidentally with the slab tear, we propose that mantle material flows through the opening, resulting in Nazca plate-motion-parallel flow in both the subslab mantle and the mantle wedge.
Guo, L-X; Li, J; Zeng, H
2009-11-01
We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different number of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattered and azimuthal angle are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.
Du, Jin; Liang, Li; Fang, Hui; Xu, Fengmei; Li, Wei; Shen, Liya; Wang, Xueying; Xu, Chun; Bian, Fang; Mu, Yiming
2017-11-01
To investigate the efficacy, safety and tolerability of saxagliptin compared with acarbose in Chinese patients with type 2 diabetes mellitus inadequately controlled with metformin monotherapy. SMART was a 24-week, multicentre, randomized, parallel-group, open-label Phase IV study conducted at 35 sites in China (September 24, 2014 to September 29, 2015). The primary outcome was absolute change from baseline in HbA1c at Week 24. Secondary outcomes assessed at Week 24 included the proportion of patients achieving HbA1c < 7.0%, the proportion of patients with gastrointestinal adverse events (GI AEs), and the proportion of patients achieving HbA1c < 7.0% without GI AEs. Safety and tolerability were also assessed in all patients who received ≥1 dose of study medication. Four hundred and eighty-eight patients were randomized (1:1) to saxagliptin or acarbose via a central randomization system (interactive voice/web response system); 241 and 244 patients received saxagliptin and acarbose, respectively, and 238 and 243 of these had ≥1 pre- and ≥1 post-baseline efficacy value recorded. Saxagliptin was non-inferior to acarbose for glycaemic control [Week 24 HbA1c change: -0.82% and -0.78%, respectively; difference (95% confidence interval): -0.04 (-0.22, 0.13)%], with similar proportions of patients in both treatment groups achieving HbA1c < 7.0%. However, fewer GI AEs were reported with saxagliptin compared with acarbose, and a greater number of patients who received saxagliptin achieved HbA1c < 7.0% without GI AEs compared with those receiving acarbose. Both therapies had similar efficacy profiles. However, saxagliptin was associated with fewer GI AEs, suggesting it might be preferential in clinical practice. NCT02243176, clinicaltrials.gov. © 2017 John Wiley & Sons Ltd.
OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux
NASA Astrophysics Data System (ADS)
Gunawan, P. H.; Pahlevi, M. R.
2018-03-01
The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores. In this research, the platform is used to minimize the computational time of the numerical scheme. The results show that using OpenACC reduces the computational time. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the success of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
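A 1-D analogue of the numerical scheme, a first-order finite volume update with the HLLE flux on a wet dam break, can be sketched as follows (serial NumPy; the grid size, CFL number, and initial depths are illustrative, and the paper's solver is 2-D radial with OpenACC offload of the flux loop):

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def hlle_flux(hL, huL, hR, huR):
    """HLLE numerical flux for the 1-D shallow water equations (wet bed)."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = np.minimum(uL - cL, uR - cR)    # leftmost wave speed estimate
    sR = np.maximum(uL + cL, uR + cR)    # rightmost wave speed estimate
    fL = np.stack([huL, huL * uL + 0.5 * g * hL**2])
    fR = np.stack([huR, huR * uR + 0.5 * g * hR**2])
    dU = np.stack([hR - hL, huR - huL])
    mid = (sR * fL - sL * fR + sL * sR * dU) / (sR - sL)
    return np.where(sL >= 0, fL, np.where(sR <= 0, fR, mid))

def dambreak_1d(n=200, t_end=0.05, hL=2.0, hR=1.0, cfl=0.4):
    """First-order FVM on [0, 1] with the dam at x = 0.5.

    t_end is short enough that the waves never reach the domain ends,
    so the untouched edge cells serve as trivial boundary conditions.
    """
    dx = 1.0 / n
    h = np.where(np.arange(n) < n // 2, hL, hR)
    hu = np.zeros(n)
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / (np.abs(hu / h) + np.sqrt(g * h)).max(),
                 t_end - t)
        f = hlle_flux(h[:-1], hu[:-1], h[1:], hu[1:])  # interface fluxes
        h[1:-1] += dt / dx * (f[0, :-1] - f[0, 1:])
        hu[1:-1] += dt / dx * (f[1, :-1] - f[1, 1:])
        t += dt
    return h, hu

h, hu = dambreak_1d()
```

The interface-flux evaluation dominates the runtime and is data-parallel across interfaces, which is what makes a `#pragma acc parallel loop` style offload effective in the 2-D case.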
Zhang, S-X; Huang, F; Gates, M; Shen, X; Holmberg, E G
2016-11-01
This is a randomized controlled prospective trial with two parallel groups. The objective of this study was to determine whether early application of tail nerve electrical stimulation (TANES)-induced walking training can improve locomotor function. This study was conducted at the SCS Research Center in Colorado, USA. A contusion injury to spinal cord T10 was produced using the New York University impactor device with a 25-mm height setting in female, adult Long-Evans rats. Injured rats were randomly divided into two groups (n=12 per group). One group was subjected to TANES-induced walking training 2 weeks post injury, and the other group, as control, received no TANES-induced walking training. Restoration of behavior and conduction was assessed using the Basso, Beattie and Bresnahan open-field rating scale, the horizontal ladder rung walking test and an electrophysiological test (Hoffmann reflex). Early application of TANES-induced walking training significantly improved the recovery of locomotor function and benefited the restoration of the Hoffmann reflex. TANES-induced walking training is a useful method to promote locomotor recovery in rats with spinal cord injury.
Lodén, Marie; Wirén, Karin; Smerud, Knut; Meland, Nils; Hønnås, Helge; Mørk, Gro; Lützow-Holm, Claus; Funk, Jörgen; Meding, Birgitta
2010-11-01
Hand eczema influences the quality of life. Management strategies include the use of moisturizers. In the present study the time to relapse of eczema during treatment with a barrier-strengthening moisturizer (5% urea) was compared with no treatment (no medical or non-medicated preparations) in 53 randomized patients with successfully treated hand eczema. The median time to relapse was 20 days in the moisturizer group compared with 2 days in the no treatment group (p = 0.04). Eczema relapsed in 90% of the patients within 26 weeks. No difference in severity was noted between the groups at relapse. Dermatology Life Quality Index (DLQI) increased significantly in both groups; from 4.7 to 7.1 in the moisturizer group and from 4.1 to 7.8 in the no treatment group (p < 0.01) at the time of relapse. Hence, the application of moisturizers seems to prolong the disease-free interval in patients with controlled hand eczema. Whether the data is applicable to moisturizers without barrier-strengthening properties remains to be elucidated.
Texture analysis of radiometric signatures of new sea ice forming in Arctic leads
NASA Technical Reports Server (NTRS)
Eppler, Duane T.; Farmer, L. Dennis
1991-01-01
Analysis of 33.6-GHz, high-resolution, passive microwave images suggests that new sea ice accumulating in open leads is characterized by a unique textural signature which can be used to discriminate new ice forming in this environment from adjacent surfaces of similar radiometric temperature. Ten training areas were selected from the data set, three of which consisted entirely of first-year ice, four entirely of multiyear ice, and three of new ice in open leads in the process of freezing. A simple gradient operator was used to characterize the radiometric texture in each training region in terms of the degree to which radiometric gradients are oriented. New ice in leads has a sufficiently high proportion of well-oriented features to distinguish it uniquely from first-year ice and multiyear ice. The predominance of well-oriented features probably reflects physical processes by which new ice accumulates in open leads. Banded structures, which are evident in aerial photographs of new ice, apparently give rise to the radiometric signature observed, in which the trend of brightness temperature gradients is aligned parallel to lead trends. First-year ice and multiyear ice, which have been subjected to a more random growth and process history, lack this banded structure and therefore are characterized by signatures in which well-aligned elements are less dominant.
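The "degree to which radiometric gradients are oriented" can be quantified in many ways; one device-free sketch uses the magnitude-weighted circular concentration of gradient orientations (the double-angle weighting below is an assumption for illustration, not necessarily the operator used in the study):

```python
import numpy as np

def orientation_strength(img):
    """Magnitude-weighted circular concentration of gradient orientations.

    The double-angle trick makes opposite gradient directions count as
    the same orientation; the result is near 1 for strongly banded
    texture and near 0 for isotropic texture.
    """
    gy, gx = np.gradient(img.astype(float))
    weight = np.hypot(gx, gy).ravel()
    if weight.sum() == 0:
        return 0.0
    theta2 = 2.0 * np.arctan2(gy, gx).ravel()
    c = np.average(np.cos(theta2), weights=weight)
    s = np.average(np.sin(theta2), weights=weight)
    return float(np.hypot(c, s))

# Banded texture (like new ice in leads) vs. unoriented texture
# (like multiyear ice with a random growth history).
rows = np.arange(64, dtype=float)[:, None]
banded = np.sin(rows / 3.0) * np.ones((1, 64))
random_tex = np.random.default_rng(0).random((64, 64))

s_banded = orientation_strength(banded)
s_random = orientation_strength(random_tex)
```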
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Zehua; Tang Xianzhu
Parallel transport of long mean-free-path plasma along an open magnetic field line is characterized by strong temperature anisotropy, which is driven by two effects. The first is magnetic moment conservation in a non-uniform magnetic field, which can transfer energy between parallel and perpendicular degrees of freedom. The second is decompressional cooling of the parallel temperature due to parallel flow acceleration by the conventional presheath electric field, which is associated with the sheath condition near the wall surface where the open magnetic field line intercepts the discharge chamber. To leading order in the gyroradius to system gradient length scale expansion, the parallel transport can be understood via the Chew-Goldberger-Low (CGL) model, which retains two components of the parallel heat flux, i.e., q_n, associated with the parallel thermal energy, and q_s, related to the perpendicular thermal energy. It is shown that, in addition to the effect of magnetic field strength (B) modulation, the two components (q_n and q_s) of the parallel heat flux play decisive roles in the parallel variation of the plasma profile, which includes the plasma density (n), parallel flow (u), parallel and perpendicular temperatures (T_∥ and T_⊥), and the ambipolar potential (φ). Both their profiles (q_n/B and q_s/B²) and the upstream values of the ratios of conductive to convective thermal flux (q_n/nuT_∥ and q_s/nuT_⊥) provide the controlling physics, in addition to B modulation. The physics described by the CGL model are contrasted with those of the double-adiabatic laws and further elucidated by comparison with a first-principles kinetic simulation for a specific but representative flux expander case.
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on POSIX threads (pthreads). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
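The loop-level, shared-memory parallelization of ray tracing described above can be sketched in miniature. This is only an analogy, using a Python thread pool in place of OpenMP's parallel-for, and trace_ray is a hypothetical stand-in for the per-ray computation; the key property is that rays are independent, so the loop over rays divides cleanly among threads.

```python
from concurrent.futures import ThreadPoolExecutor

def trace_ray(ray):
    # Hypothetical per-ray work: advance a 1D ray two steps along its direction.
    origin, direction = ray
    return origin + 2 * direction

def trace_all(rays, workers=4):
    # Shared-memory analogue of an OpenMP "parallel for" over independent rays:
    # the pool divides the iteration space among threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace_ray, rays))

print(trace_all([(0.0, 1.0), (1.0, 0.5), (2.0, -1.0)]))  # [2.0, 2.0, 0.0]
```

Because `pool.map` preserves input order, results come back as if the loop had run serially, which is the same guarantee an OpenMP worksharing loop gives for ordered output arrays.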
Wiegersma, Marian; Panman, Chantal M C R; Kollen, Boudewijn J; Vermeulen, Karin M; Schram, Aaltje J; Messelink, Embert J; Berger, Marjolein Y; Lisman-Van Leeuwen, Yvonne; Dekker, Janny H
2014-02-01
Pelvic floor muscle training (PFMT) and pessaries are commonly used in the conservative treatment of pelvic organ prolapse (POP). Because there is a lack of evidence regarding the optimal choice between these two interventions, we designed the "Pelvic Organ prolapse in primary care: effects of Pelvic floor muscle training and Pessary treatment Study" (POPPS). POPPS consists of two parallel open label randomized controlled trials performed in primary care, in women aged ≥55 years, recruited through a postal questionnaire. In POPPS trial 1, women with mild POP receive either PFMT or watchful waiting. In POPPS trial 2, women with advanced POP receive either PFMT or pessary treatment. Patient recruitment started in 2009 and was finished in December 2012. Primary outcome of both POPPS trials is improvement in POP-related symptoms. Secondary outcomes are quality of life, sexual function, POP-Q stage, pelvic floor muscle function, post-void residual volume, patients' perception of improvement, and costs. All outcomes are measured 3, 12, and 24 months after the start of treatment. Cost-effectiveness will be calculated based on societal costs, using the PFDI-20 and the EQ-5D as outcomes. In this paper the POPPS design, the encountered challenges and our solutions, and participant baseline characteristics are presented. For both trials the target numbers of patients in each treatment group are achieved, giving this study sufficient power to lead to promising results. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters (the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic) during an interim analysis.
Meirik, Olav; Brache, Vivian; Orawan, Kiriwat; Habib, Ndema Abu; Schmidt, Johannes; Ortayli, Nuriye; Culwell, Kelly; Jackson, Emily; Ali, Moazzam
2013-01-01
Comparative data on etonogestrel and two-rod levonorgestrel contraceptive implants are lacking. A multicenter, open, parallel-group trial with random allocation of implants was performed. For every second implant user, an age-matched woman choosing an intrauterine device (IUD) (TCu380A) was admitted. Methods and data on implant/IUD insertion and 6-week follow-up are reported. A total of 2008 women were randomized to an implant, and 974 women were enrolled in the IUD group. Results from 997 etonogestrel implant users, 997 levonorgestrel implant users and 971 IUD users were analyzed. In the etonogestrel and levonorgestrel groups, respectively, mean insertion durations were 51 (SD 50.2) s and 88 (SD 60.8) s; complication rates at insertion were 0.8% and 0.2%; and at follow-up, 27.2% and 26.7% of women, respectively, had signs or symptoms at the insertion site. At follow-up within 6 weeks after insertion, all implants were in situ, while 2.1% of IUDs were expelled. Performance of etonogestrel and levonorgestrel implants at insertion and within the first 6 weeks is similar. Short-term (6 weeks) continuation rates appear higher for implants than TCu380A. Copyright © 2013 Elsevier Inc. All rights reserved.
Yoo, Jung-Hwa; Yim, Sung-Vin
2018-01-01
Background Bojungikki-tang (BJIKT) is a widely used traditional herbal formula in China, Japan, and Korea. There have been reports that several herbs in BJIKT have interactions with antiplatelet drugs, such as aspirin. This study aimed to assess whether BJIKT interacts with aspirin in terms of pharmacokinetics (PK) and pharmacodynamics (PD) in healthy subjects and ischemic stroke patients. Methods The phase I interaction trial was a randomized, open-label, crossover study of 10 healthy male subjects, and the phase III interaction trial was a randomized, placebo-controlled, parallel study of 43 ischemic stroke patients. Each participant randomly received aspirin + BJIKT or aspirin + placebo. For PK analysis, plasma acetyl salicylic acid (ASA) and salicylic acid (SA) were evaluated, and, for PD analysis, platelet aggregation and plasma thromboxane B2 (TxB2) were measured. Results Among the PK parameters, the mean area under the curve, maximum concentration, and peak concentration time of ASA and SA were not different between the two groups in healthy subjects and ischemic stroke patients. In the PD profiles, TxB2 concentrations and platelet aggregation were not affected by coadministration of BJIKT in healthy subjects and ischemic stroke patients. Conclusions These results suggest that coadministration of BJIKT with aspirin may not result in a herb-drug interaction. PMID:29599812
Ali, Mohammed K; Amin, Maggy E; Amin, Ahmed F; Abd El Aal, Diaa Eldeen M
2017-03-01
To test the effect of aspirin and omega 3 on fetal weight as well as feto-maternal blood flow in asymmetrical intrauterine growth restriction (IUGR). This study is a clinically registered (NCT02696577), open, parallel, randomized controlled trial, conducted at Assiut Woman's Health Hospital, Egypt, including 80 pregnant women (28-30 weeks) with IUGR. They were randomized either to group I: aspirin, or group II: aspirin plus omega 3. The primary outcome was the fetal weight after 6 weeks of treatment. Secondary outcomes included Doppler blood flow changes in both uterine and umbilical arteries, birth weight, time and method of delivery, and admission to NICU. The outcome variables were analyzed using paired and unpaired t-tests. The estimated fetal weight increased significantly more in group II than in group I (p=0.00). The uterine and umbilical arteries blood flow increased significantly in group II (p<0.05). The birth weight in group II was higher than that observed in group I (p<0.05). The use of aspirin with omega 3 is more effective than aspirin alone in increasing fetal weight and improving utero-placental blood flow in IUGR. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Laser Safety Method For Duplex Open Loop Parallel Optical Link
Baumgartner, Steven John; Hedin, Daniel Scott; Paschal, Matthew James
2003-12-02
A method and apparatus are provided to ensure that laser optical power does not exceed a "safe" level in an open loop parallel optical link in the event that a fiber optic ribbon cable is broken or otherwise severed. A duplex parallel optical link includes a transmitter and receiver pair and a fiber optic ribbon that includes a designated number of channels that cannot be split. The duplex transceiver includes a corresponding transmitter and receiver that are physically attached to each other and cannot be detached therefrom, so as to ensure safe laser optical power in the event that the fiber optic ribbon cable is broken or severed. Safe optical power is ensured by redundant current and voltage safety checks.
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms
Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...
2016-09-21
In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nyström extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access patterns of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Knights Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
DRY CUPPING IN CHILDREN WITH FUNCTIONAL CONSTIPATION: A RANDOMIZED OPEN LABEL CLINICAL TRIAL.
Shahamat, Mahmoud; Daneshfard, Babak; Najib, Khadijeh-Sadat; Dehghani, Seyed Mohsen; Tafazoli, Vahid; Kasalaei, Afshineh
2016-01-01
As a common disease in pediatrics, constipation poses a high burden to the community. In this study, we aimed to investigate the efficacy of dry cupping therapy (an Eastern traditional manipulative therapy) in children with functional constipation. One hundred and twenty children (4-18 years old) diagnosed with functional constipation according to the ROME III criteria were assigned to receive a traditional dry cupping protocol on the abdominal wall for 8 minutes every other day or standard laxative therapy (polyethylene glycol (PEG) 40% solution without electrolyte, 0.4 g/kg once daily) for 4 weeks, in an open label randomized controlled clinical trial using a parallel design with a 1:1 allocation ratio. Patients were evaluated prior to and following 2, 4, 8 and 12 weeks of the intervention commencement in terms of the ROME III criteria for functional constipation. There were no significant differences between the two arms regarding demographic and clinical baseline characteristics. After two weeks of the intervention, there was a significantly better result in most of the items of the ROME III criteria for patients in the PEG group. In contrast, after four weeks of the intervention, the result was significantly better in the cupping group. There was no significant difference in the number of patients with constipation after 4 and 8 weeks of the follow-up period. This study showed that dry cupping of the abdominal wall, as a traditional manipulative therapy, can be as effective as standard laxative therapy in children with functional constipation.
The studies of FT-IR and CD spectroscopy on catechol oxidase I from tobacco
NASA Astrophysics Data System (ADS)
Xiao, Hourong; Xie, Yongshu; Liu, Qingliang; Xu, Xiaolong; Shi, Chunhua
2005-10-01
A novel copper-containing enzyme named COI (catechol oxidase I) has been isolated and purified from tobacco by extracting acetone-emerged powder with phosphate buffer, centrifugation at low temperature, ammonium sulfate fractional precipitation, and column chromatography on DEAE-sephadex (A-50), sephadex (G-75), and DEAE-cellulose (DE-52). PAGE and SDS-PAGE were used to detect the enzyme purity and to determine its molecular weight. Then the secondary structures of COI at different pH, different temperatures and different concentrations of guanidine hydrochloride (GdnHCl) were studied by FT-IR, Fourier self-deconvolution spectra, and circular dichroism (CD). At pH 2.0, the contents of both α-helix and anti-parallel β-sheet decrease, and that of random coil increases, while β-turn is unchanged compared with the neutral condition (pH 7.0). At pH 11.0, the results indicate that the contents of α-helix, anti-parallel β-sheet and β-turn decrease, while random coil structure increases. According to the CD measurements, the relative average fractions of α-helix, anti-parallel β-sheet, β-turn/parallel β-sheet, aromatic residues and disulfide bond, and random coil/γ-turn are 41.7%, 16.7%, 23.5%, 11.3%, and 6.8% at pH 7.0, respectively, while 7.2%, 7.7%, 15.2%, 10.7%, 59.2% at pH 2.0, and 20.6%, 9.5%, 15.2%, 10.5%, 44.2% at pH 11.0. Both α-helix and random coil decrease with increasing temperature, and anti-parallel β-sheet increases at the same time. After incubation in 6 mol/L guanidine hydrochloride for 30 min, the fraction of α-helix almost disappears (only 1.1% left), while random coil/γ-turn increases to 81.8%, which coincides well with the results obtained through the enzymatic activity experiment.
Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...
2016-11-16
Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the “resolution of the identity second-order Møller–Plesset perturbation theory” (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version exceeded existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
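The core update of a finite-difference wave solver of this kind can be sketched in one dimension (a minimal acoustic illustration under assumed parameters, far simpler than OpenSWPC's 3D viscoelastic scheme with attenuation and absorbing boundaries):

```python
import numpy as np

def fd_step(u_prev, u_curr, c, dt, dx):
    # Second-order central differences in time and space for u_tt = c^2 u_xx.
    lap = np.zeros_like(u_curr)
    lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
    return 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap

nx, c, dx = 101, 1.0, 1.0
dt = 0.5 * dx / c                 # respects the CFL stability limit dt <= dx/c
u0 = np.zeros(nx)
u1 = np.zeros(nx)
u1[nx // 2] = 1.0                 # point disturbance at the grid center
u2 = fd_step(u0, u1, c, dt, dx)   # disturbance begins to spread to neighbors
```

In a production code this update loop is exactly where domain decomposition applies: each MPI rank owns a slab of the grid and exchanges one layer of halo points per step, while OpenMP threads share the stencil loop within a rank.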
Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul
2010-11-23
A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Nodes vary a choice of routing policy for routing data in the network in a semi-random manner, so that similarly situated packets are not always routed along the same path. Semi-random variation of the routing policy tends to avoid certain local hot spots of network activity, which might otherwise arise using more consistent routing determinations. Preferably, the originating node chooses a routing policy for a packet, and all intermediate nodes in the path route the packet according to that policy. Policies may be rotated on a round-robin basis, selected by generating a random number, or otherwise varied.
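The semi-random policy selection at the originating node can be sketched as follows (the policy names are hypothetical stand-ins; the patent concerns real inter-nodal routing, not this toy):

```python
import random

POLICIES = ["x_first", "y_first", "z_first"]  # hypothetical routing policies

class OriginNode:
    """Chooses a routing policy per packet, either rotated round-robin or
    drawn at random, so similarly situated packets do not always take the
    same path; intermediate nodes would then honor the chosen policy."""

    def __init__(self, mode="round_robin", seed=0):
        self.mode = mode
        self.counter = 0
        self.rng = random.Random(seed)

    def choose_policy(self):
        if self.mode == "round_robin":
            policy = POLICIES[self.counter % len(POLICIES)]
            self.counter += 1
            return policy
        return self.rng.choice(POLICIES)  # random-number selection

node = OriginNode()
print([node.choose_policy() for _ in range(4)])
# ['x_first', 'y_first', 'z_first', 'x_first']
```

Rotating or randomizing the policy at the origin spreads successive packets over different paths, which is what breaks up the local hot spots the abstract describes.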
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Gertsik, Lev; Favreau, Joya T.; Smith, Shawnee I.; Mirocha, James M.; Rao, Uma; Daar, Eric S.
2013-01-01
Abstract Objectives The study objectives were to determine whether massage therapy reduces symptoms of depression in subjects with human immunodeficiency virus (HIV) disease. Design Subjects were randomized non-blinded into one of three parallel groups to receive Swedish massage or to one of two control groups, touch or no intervention for eight weeks. Settings/location The study was conducted at the Department of Psychiatry and Behavioral Neurosciences at Cedars-Sinai Medical Center in Los Angeles, California, which provided primary clinical care in an institutional setting. Subjects Study inclusion required being at least 16 years of age, HIV-seropositive, with a diagnosis of major depressive disorder. Subjects had to be on a stable neuropsychiatric, analgesic, and antiretroviral regimen for >30 days with no plans to modify therapy for the duration of the study. Approximately 40% of the subjects were currently taking antidepressants. All subjects were medically stable. Fifty-four (54) subjects were randomized, 50 completed at least 1 week (intent-to-treat; ITT), and 37 completed the study (completers). Interventions Swedish massage and touch subjects visited the massage therapist for 1 hour twice per week. The touch group had a massage therapist place both hands on the subject with slight pressure, but no massage, in a uniform distribution in the same pattern used for the massage subjects. Outcome measures The primary outcome measure was the Hamilton Rating Scale for Depression score, with the secondary outcome measure being the Beck Depression Inventory. Results For both the ITT and completers analyses, massage significantly reduced the severity of depression beginning at week 4 (p≤0.04) and continuing at weeks 6 (p≤0.03) and 8 (p≤0.005) compared to no intervention and/or touch. Conclusions The results indicate that massage therapy can reduce symptoms of depression in subjects with HIV disease. 
The durability of the response, optimal “dose” of massage, and mechanisms by which massage exerts its antidepressant effects remain to be determined. PMID:23098696
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
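For readers unfamiliar with the procedure under discussion, Horn's parallel analysis can be sketched in a few lines (a minimal illustration on standardized data; the Tracy-Widom refinements analyzed in the paper are not shown):

```python
import numpy as np

def horn_parallel_analysis(X, n_draws=200, quantile=0.95, seed=0):
    # Retain leading components whose sample eigenvalues exceed the chosen
    # quantile of position-matched eigenvalues from random normal data.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sample_eigs = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    null_eigs = np.empty((n_draws, p))
    for i in range(n_draws):
        R = rng.standard_normal((n, p))
        null_eigs[i] = np.linalg.eigvalsh(np.cov(R, rowvar=False))[::-1]
    thresholds = np.quantile(null_eigs, quantile, axis=0)
    k = 0
    while k < p and sample_eigs[k] > thresholds[k]:
        k += 1
    return k

# Six standardized variables driven by one strong common component.
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 1))
X = f @ np.full((1, 6), 2.0) + 0.1 * rng.standard_normal((300, 6))
X = (X - X.mean(axis=0)) / X.std(axis=0)
```

With one dominant component, the first sample eigenvalue far exceeds the random-data threshold while the remaining ones fall below it, so the procedure retains a single component.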
Bartl, Christoph; Stengel, Dirk; Bruckner, Thomas; Rossion, Inga; Luntz, Steffen; Seiler, Christoph; Gebhard, Florian
2011-03-22
Fractures of the distal radius represent the most common fracture in elderly patients, and often indicate the onset of symptomatic osteoporosis. A variety of treatment options is available, including closed reduction and plaster casting, K-wire stabilization, external fixation, and open reduction and internal fixation (ORIF) with volar locked plating. The latter is widely promoted by clinicians and hardware manufacturers. Closed reduction and cast stabilization for six weeks is a simple, convenient, and ubiquitously available intervention. In contrast, ORIF requires hospitalization, but allows for functional rehabilitation. Given the lack of randomized controlled trials, it remains unclear whether ORIF leads to better functional outcomes one year after injury than closed reduction and casting. ORCHID (Open reduction and internal fixation versus casting for highly comminuted intra-articular fractures of the distal radius) is a pragmatic, randomized, multi-center, clinical trial with two parallel treatment arms. It is planned to include 504 patients in 15 participating centers throughout Germany over a three-year period. Patients are allocated by a central web-based randomization tool. The primary objective is to determine differences in the Short Form 36 (SF-36) Physical Component Score (PCS) between volar locked plating and closed reduction and casting of intraarticular, comminuted distal radius fractures in patients > 65 years of age one year after the fracture. Secondary outcomes include differences in other SF-36 dimensions, the EuroQol-5D questionnaire, and the Disability of the Arm, Shoulder, and Hand (DASH) instrument. Also, the range of motion in the affected wrist, activities of daily living, complications (including secondary ORIF and revision surgery), as well as serious adverse events will be assessed. Data obtained during the trial will be used for later health-economic evaluations.
The trial architecture involves a central statistical unit, an independent monitoring institute, and a data safety monitoring board. Following approval by the institutional review boards of all participating centers, conduct and reporting will strictly adhere to national and international rules, regulations, and recommendations (e.g., Good Clinical Practice, data safety laws, and EQUATOR/CONSORT proposals). To our knowledge, ORCHID is the first multicenter RCT designed to assess quality of life and functional outcomes following operative treatment compared to conservative treatment of complex, intra-articular fractures of the distal radius in elderly patients. The results are expected to influence future treatment recommendations and policies on an international level. ISRCTN: ISRCTN76120052 Registration date: 31.07.2008; Randomization of first patient: 15.09.2008.
TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, S; Nazareth, D; Bellor, M
Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and Python scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools, which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8–10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10–15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations.
In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
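The history-splitting strategy described above can be sketched generically. The following is a minimal Python illustration, not BEAMnrc's actual job-submission mechanism: each parallel job receives its own random-number seed so the combined tally behaves like one long run, and a trivial Monte Carlo integrand (estimating pi) stands in for the dose calculation.

```python
import random

def run_job(n_histories, seed):
    # Each parallel job gets an independent RNG stream via its own seed,
    # so the per-job tallies can be combined without correlated sampling.
    rng = random.Random(seed)
    return sum(1 for _ in range(n_histories)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

def parallel_estimate(total_histories, n_jobs, base_seed=12345):
    # Split the requested histories evenly; in a real cluster setup each
    # run_job call would be an independent batch job.
    per_job = total_histories // n_jobs
    hits = sum(run_job(per_job, base_seed + job) for job in range(n_jobs))
    return 4.0 * hits / (per_job * n_jobs)  # combined estimate of pi
```

The seed-offset scheme (`base_seed + job`) is the simplest choice; production MC codes typically use dedicated parallel RNG streams instead.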
OPAL: An Open-Source MPI-IO Library over Cray XT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Weikuan; Vetter, Jeffrey S; Canon, Richard Shane
Parallel IO over Cray XT is supported by a vendor-supplied MPI-IO package. This package contains a proprietary ADIO implementation built on top of the sysio library. While it is reasonable to maintain a stable code base for application scientists' convenience, it is also very important for system developers and researchers to analyze and assess the effectiveness of parallel IO software and, accordingly, tune and optimize the MPI-IO implementation. A proprietary parallel IO code base forfeits such flexibility. On the other hand, a generic UFS-based MPI-IO implementation is typically used on many Linux-based platforms. We have developed an open-source MPI-IO package over Lustre, referred to as OPAL (OPportunistic and Adaptive MPI-IO Library over Lustre). OPAL provides a single source-code base for MPI-IO over Lustre on Cray XT and Linux platforms. Compared to the Cray implementation, OPAL provides a number of useful features, including arbitrary specification of striping patterns and Lustre-stripe-aligned file domain partitioning. This paper presents performance comparisons between OPAL and Cray's proprietary implementation. Our evaluation demonstrates that OPAL achieves performance comparable to the Cray implementation. We also exemplify the benefits of an open-source package in revealing the underpinnings of parallel IO performance.
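The stripe-aligned file domain partitioning mentioned above can be illustrated with a small sketch. The helper below is a hypothetical reconstruction, not OPAL code: it divides a file among writers so that every domain boundary falls on a Lustre stripe boundary, ensuring no two writers ever touch the same stripe.

```python
def stripe_aligned_domains(file_size, stripe_size, n_writers):
    """Partition [0, file_size) into contiguous per-writer domains whose
    boundaries fall on stripe boundaries, so writers never share a stripe."""
    n_stripes = -(-file_size // stripe_size)  # ceiling division
    base, extra = divmod(n_stripes, n_writers)
    domains, offset = [], 0
    for w in range(n_writers):
        stripes = base + (1 if w < extra else 0)  # spread leftover stripes
        end = min(offset + stripes * stripe_size, file_size)
        domains.append((offset, end))
        offset = end
    return domains
```

With a 10 MB file, 1 MB stripes, and 3 writers, this yields domains of 4, 3, and 3 stripes, all starting on stripe boundaries.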
Tycho 2: A Proxy Application for Kinetic Transport Sweeps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, Charles Kristopher; Warsa, James S.
2016-09-14
Tycho 2 is a proxy application that implements discrete ordinates (SN) kinetic transport sweeps on unstructured, 3D, tetrahedral meshes. It has been designed to be small and require minimal dependencies to make collaboration and experimentation as easy as possible. Tycho 2 has been released as open source software. The software is currently in a beta release with plans for a stable release (version 1.0) before the end of the year. The code is parallelized via MPI across spatial cells and OpenMP across angles. Currently, several parallelization algorithms are implemented.
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations, ranging from 18.5× to 157× speed-up, once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly. The developed algorithms make DFC analysis more practical for multi-subject studies and for more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
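The sliding-window structure of DFC analysis is straightforward to sketch. The Python below is an illustrative reconstruction, not the authors' code: it computes windowed Pearson correlations between two time courses and maps the windows over a thread pool, mirroring the thread-based decomposition (the paper's actual speed-ups come from OpenMP and CUDA implementations).

```python
import math
from concurrent.futures import ThreadPoolExecutor

def pearson(x, y):
    # Plain Pearson correlation of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def dynamic_connectivity(ts1, ts2, window, step=1, workers=4):
    # One correlation per sliding-window position; windows are
    # independent, so they map cleanly onto parallel workers.
    starts = range(0, len(ts1) - window + 1, step)
    def corr_at(s):
        return pearson(ts1[s:s + window], ts2[s:s + window])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(corr_at, starts))
```

In a real analysis this correlation would be computed for every pair of brain regions, which is where the GPU parallelism pays off.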
Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin
2016-01-01
Sequential parallel comparison design (SPCD) was proposed for trials with a high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulation of placebo nonresponders are then combined to yield a single p-value for the treatment comparison. The two-way enriched design (TED) is an extension of SPCD in which active-treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
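The two-stage flow of SPCD can be made concrete with a small allocation sketch (hypothetical, for illustration only): stage 1 randomizes all subjects 1:1, and stage 2 re-randomizes only the stage-1 placebo nonresponders.

```python
import random

def spcd_allocate(n_subjects, p_placebo_response, seed=7):
    """Illustrative SPCD flow: stage 1 randomizes everyone 1:1; stage 2
    re-randomizes only the stage-1 placebo nonresponders."""
    rng = random.Random(seed)
    stage1 = [(i, rng.choice(["placebo", "active"])) for i in range(n_subjects)]
    # Simulate placebo response: nonresponders continue to stage 2.
    nonresponders = [i for i, arm in stage1
                     if arm == "placebo" and rng.random() > p_placebo_response]
    stage2 = [(i, rng.choice(["placebo", "active"])) for i in nonresponders]
    return stage1, stage2
```

The analysis step (combining "all comers" with the placebo-nonresponder subpopulation into a single test) is deliberately omitted here.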
A network approach to decentralized coordination of energy production-consumption grids.
Omodei, Elisa; Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting in the formation of local distributed energy sources and loads that can operate in parallel, independently from the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that allows us to analytically predict the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, with a small neighborhood, instead of the obvious but more costly all-to-all coordination. We compute analytically the optimal number of coordinated users in random homogeneous networks. The proposed methodology opens a new way of confronting the analysis of energy demand-side management in networked systems.
Fenoterol stimulates human erythropoietin production via activation of the renin angiotensin system
Freudenthaler, S M; Schenck, T; Lucht, I; Gleiter, C H
1999-01-01
Aims: The present study assessed the hypothesis that the β2 sympathomimetic fenoterol influences the production of erythropoietin (EPO) by activation of the renin angiotensin system (RAS), i.e. angiotensin II. Methods: In an open, parallel, randomized study, healthy volunteers received i.v. either placebo (electrolyte solution), fenoterol, or fenoterol in combination with an oral dose of the AT1-receptor antagonist losartan. Results: Compared with placebo treatment, AUCEPO(0,24 h) was significantly increased after fenoterol application by 48%, whereas no increase could be detected in the group receiving fenoterol and losartan. The rise of PRA was statistically significant under fenoterol and fenoterol plus losartan. Conclusions: Stimulation of EPO production during fenoterol infusion appears to be angiotensin II-mediated. Thus, angiotensin II may be considered an important physiological modulator of EPO production in humans. PMID:10583037
NASA Astrophysics Data System (ADS)
Moreto, Jose; Liu, Xiaofeng
2017-11-01
The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the MATLAB function rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, achieved by using different random number seeds, are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
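The seeded-ensemble step described above can be sketched in a few lines (an illustrative Python analogue, not the authors' MATLAB code): each realization draws noise uniformly within ±40% of the maximum gradient from a distinct seed, and an error statistic is averaged over the realizations.

```python
import random

def noise_realization(grad_max, n_points, seed):
    """One noise field: uniform random values within +/-40% of the
    maximum pressure-gradient magnitude, from a given seed."""
    rng = random.Random(seed)
    return [rng.uniform(-0.4 * grad_max, 0.4 * grad_max)
            for _ in range(n_points)]

def ensemble_rms(grad_max, n_points, n_realizations):
    # RMS of the noise field, averaged over many seeded realizations,
    # mirroring the 1000-realization averaging in the study.
    total = 0.0
    for seed in range(n_realizations):
        field = noise_realization(grad_max, n_points, seed)
        total += (sum(v * v for v in field) / n_points) ** 0.5
    return total / n_realizations
```

For uniform noise on ±0.4·g_max the expected RMS is 0.4·g_max/√3 ≈ 0.231·g_max, which the ensemble average approaches as the number of seeds grows.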
Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.
Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru
2016-04-28
Reliably rooted phylogenetic trees play irreplaceable roles in clarifying diversification in the patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which hindered phylogenetic analysis that used specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
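The sequential Cyclic CD base case of the framework can be sketched for the lasso. This is an illustrative implementation, not the paper's code: each coordinate is updated in turn by soft-thresholding its partial residual correlation; the parallel variants (Shotgun, Thread-Greedy, Coloring-Based) differ mainly in which coordinates are updated concurrently.

```python
def soft_threshold(rho, lam):
    # The l1 proximal operator: shrink toward zero by lam.
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def cyclic_cd_lasso(A, b, lam, n_iters=100):
    """Cyclic coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1.
    A is a list of rows; x starts at zero."""
    n, d = len(A), len(A[0])
    x = [0.0] * d
    for _ in range(n_iters):
        for j in range(d):
            col = [A[i][j] for i in range(n)]
            # Residual with coordinate j excluded from the fit.
            r = [b[i] - sum(A[i][k] * x[k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(col[i] * r[i] for i in range(n))
            norm = sum(c * c for c in col)
            x[j] = soft_threshold(rho, lam) / norm
    return x
```

A parallel variant would pick several coordinates per sweep (greedily per thread, or by graph coloring of correlated features) and apply the same update to each.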
Multi-threading: A new dimension to massively parallel scientific computation
NASA Astrophysics Data System (ADS)
Nielsen, Ida M. B.; Janssen, Curtis L.
2000-06-01
Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models: (1) the population-average parameters have an important interpretation for public health applications, and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384-processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M
2014-05-01
Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
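The parallel-arm IV estimator referenced above has a simple closed form, the Wald ratio: the intention-to-treat effect on the outcome divided by the intention-to-treat effect on treatment received. A minimal sketch, with a hypothetical data layout of (assigned, received, outcome) triples:

```python
def cace_estimate(data):
    """Wald/IV estimator of the complier average causal effect.
    data: iterable of (assigned, received, outcome), with assigned
    and received coded 0/1."""
    treated = [(r, y) for a, r, y in data if a == 1]
    control = [(r, y) for a, r, y in data if a == 0]
    mean = lambda xs: sum(xs) / len(xs)
    # ITT effect on the outcome, and on treatment actually received.
    itt_outcome = mean([y for _, y in treated]) - mean([y for _, y in control])
    itt_received = mean([r for r, _ in treated]) - mean([r for r, _ in control])
    return itt_outcome / itt_received
```

With 50% compliance in the assigned-treatment arm, an ITT effect of 1 scales up to a CACE of 2, reflecting that the effect is concentrated among compliers.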
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. 
The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
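The preprocessor/JIT technique can be sketched without an OpenCL runtime: because kernels are compiled from source at run time, hardware-specific #define lines can be prepended so the OpenCL C preprocessor eliminates branches statically. The Python below is an illustrative host-side helper; the kernel text and option names are made up for the example.

```python
def specialize_kernel(template, **options):
    """Prepend #define lines (consumed by the OpenCL C preprocessor at
    JIT-compile time) so device-specific branches vanish statically."""
    defines = "".join(f"#define {k.upper()} {v}\n"
                      for k, v in sorted(options.items()))
    return defines + template

# Hypothetical kernel with one compile-time branch and one constant.
KERNEL = """__kernel void scale(__global float *x) {
    int i = get_global_id(0);
#if USE_LOCAL_MEM
    /* path for devices with fast local memory */
#endif
    x[i] *= FACTOR;
}
"""
```

The specialized source string would then be handed to the platform's JIT compiler (e.g. via a program-build call), one build per target device.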
Effect of alignment of easy axes on dynamic magnetization of immobilized magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Yoshida, Takashi; Matsugi, Yuki; Tsujimura, Naotaka; Sasayama, Teruyoshi; Enpuku, Keiji; Viereck, Thilo; Schilling, Meinhard; Ludwig, Frank
2017-04-01
In some biomedical applications of magnetic nanoparticles (MNPs), the particles are physically immobilized. In this study, we explore the effect of the alignment of the magnetic easy axes on the dynamic magnetization of immobilized MNPs under an AC excitation field. We prepared three immobilized MNP samples: (1) a sample in which the easy axes are randomly oriented, (2) a parallel-aligned sample in which the easy axes are parallel to the AC field, and (3) an orthogonally aligned sample in which the easy axes are perpendicular to the AC field. First, we show that the parallel-aligned sample has the largest hysteresis in the magnetization curve and the largest harmonic magnetization spectra, followed by the randomly oriented and orthogonally aligned samples. For example, a 1.6-fold increase was observed in the area of the hysteresis loop of the parallel-aligned sample compared to that of the randomly oriented sample. To quantitatively discuss the experimental results, we perform a numerical simulation based on a Fokker-Planck equation, in which probability distributions for the directions of the easy axes are taken into account in simulating the prepared MNP samples. We obtained quantitative agreement between experiment and simulation. These results indicate that the dynamic magnetization of immobilized MNPs is significantly affected by the alignment of the easy axes.
Kokkos: Enabling manycore performance portability through polymorphic memory access patterns
Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel
2014-07-22
The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos' abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.
Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro
2016-01-01
Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires additional implementation effort to be used in parallel, for example with Open Multi-Processing (OpenMP) libraries. As next-generation sequencing data accumulate, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
Extendability of parallel sections in vector bundles
NASA Astrophysics Data System (ADS)
Kirschner, Tim
2016-01-01
I address the following question: Given a differentiable manifold M, what are the open subsets U of M such that, for all vector bundles E over M and all linear connections ∇ on E, any ∇-parallel section in E defined on U extends to a ∇-parallel section in E defined on M? For simply connected manifolds M (among others) I describe the entirety of all such sets U which are, in addition, the complement of a C1 submanifold, boundary allowed, of M. This delivers a partial positive answer to a problem posed by Antonio J. Di Scala and Gianni Manno (2014). Furthermore, in case M is an open submanifold of Rn, n ≥ 2, I prove that the complement of U in M, not required to be a submanifold now, can have arbitrarily large n-dimensional Lebesgue measure.
Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana
2016-10-26
It has been shown that it is possible to modify macronutrient oxidation, physical fitness and resting energy expenditure (REE) by changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and a conventional diet (C) with the same caloric restriction (-500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry, and indirect calorimetry was performed at the start and after 12 weeks. Repeated-measures ANOVA (analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used to evaluate the relationships between continuous variables and factors. Maximal oxygen consumption (VO2max) increased by 12% in the vegetarian group (V) (F = 13.1, p < 0.001, partial η² = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η² = 0.209). Maximal performance (Wattmax) increased by 21% in V (F = 8.3, p < 0.001, partial η² = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η² = 0.116). Our results indicate that V leads more effectively to improvement in physical fitness than C after an aerobic exercise program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas
The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the Cray XK7 Titan, at its maximal heterogeneous capability. Such simulations had not been possible before, because the time-to-solution exceeded a 5-day wall-clock limit for one physics case by a factor of more than 10. Frontier techniques employed include nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, and dynamic repartitioning.
Allinea Parallel Profiling and Debugging Tools on the Peregrine System
The Allinea remote client is available for Mac, Windows, and Linux. To profile on Peregrine, connect with X11 forwarding enabled and run the 'map' command, which opens the profiler GUI; the same tools can also be used for debugging.
NASA Technical Reports Server (NTRS)
Nagano, S.
1979-01-01
Base driver with common-load-current feedback protects paralleled inverter systems from open or short circuits. Circuit eliminates total system oscillation that can occur in conventional inverters because of open circuit in primary transformer winding. Common feedback signal produced by functioning modules forces operating frequency of failed module to coincide with clock drive so module resumes normal operating frequency in spite of open circuit.
Methodology Series Module 4: Clinical Trials.
Setia, Maninder Singh
2016-01-01
In a clinical trial, study participants are (usually) divided into two groups. One group is then given the intervention and the other group is not given the intervention (or may be given some existing standard of care). We compare the outcomes in these groups and assess the role of intervention. Some of the trial designs are (1) parallel study design, (2) cross-over design, (3) factorial design, and (4) withdrawal group design. The trials can also be classified according to the stage of the trial (Phase I, II, III, and IV) or the nature of the trial (efficacy vs. effectiveness trials, superiority vs. equivalence trials). Randomization is one of the procedures by which we allocate different interventions to the groups. It ensures that all the included participants have a specified probability of being allocated to either of the groups in the intervention study. If participants and the investigator know about the allocation of the intervention, then it is called an "open trial." However, many of the trials are not open - they are blinded. Blinding is useful to minimize bias in clinical trials. The researcher should familiarize themselves with the CONSORT statement and the appropriate Clinical Trials Registry of India.
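Randomization as described above is often implemented with permuted blocks, which keep the treatment arms balanced over time. A minimal sketch (illustrative only, not from the module):

```python
import random

def permuted_block_randomization(n_subjects, block_size=4, seed=42):
    """Permuted-block randomization: within each block, half the slots
    are arm A and half arm B, shuffled, so the arms stay balanced."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]
```

In a blinded trial the allocation sequence would be generated and held by a third party, not by the investigator enrolling participants.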
de Luis, D A; Izaola, O; de la Fuente, B; Terroba, M C; Cuellar, L; Cabezas, G
2013-06-01
The aim of our study was to investigate whether two different daily doses of a high monounsaturated fatty acid (MUFA) diabetes-specific enteral formula could improve nutritional variables as well as metabolic parameters. We conducted a randomized, open-label, multicenter, parallel-group study. Twenty-seven patients with type 2 diabetes mellitus with recent weight loss were randomized to one of two study groups for a ten-week period: group 1 (two cans per day) and group 2 (three cans per day). A significant decrease in HbA1c was detected in both groups; the decrease was larger in group 2 (0.98%; 95% confidence interval 0.19-1.88) than in group 1 (0.60%; 95% confidence interval 0.14-1.04). A significant increase in weight, body mass index, fat mass, albumin, prealbumin and transferrin was observed in both groups, with no statistical difference in this improvement between the groups. The weight gain was larger in group 2 (4.59 kg; 95% confidence interval 1.71-9.49) than in group 1 (1.46 kg; 95% confidence interval 0.39-2.54). Gastrointestinal tolerance (diarrhea episodes) was good with both formulas, with no statistical difference (7.60% vs 7.14%; ns). A high monounsaturated fatty acid diabetes-specific supplement improved HbA1c and nutritional status, and the improvements were greater with three supplements per day than with two.
NASA Astrophysics Data System (ADS)
Akibue, Seiseki; Kato, Go
2018-04-01
For distinguishing quantum states sampled from a fixed ensemble, the gap in bipartite and single-party distinguishability can be interpreted as a nonlocality of the ensemble. In this paper, we consider bipartite state discrimination in a composite system consisting of N subsystems, where each subsystem is shared between two parties and the state of each subsystem is randomly sampled from a particular ensemble comprising the Bell states. We show that the success probability of perfectly identifying the state converges to 1 as N → ∞ if the entropy of the probability distribution associated with the ensemble is less than 1, even if the success probability is less than 1 for any finite N. In other words, the nonlocality of the N-fold ensemble asymptotically disappears if the probability distribution associated with each ensemble is concentrated. Furthermore, we show that the disappearance of the nonlocality can be regarded as a remarkable counterexample of a fundamental open question in theoretical computer science, called a parallel repetition conjecture of interactive games with two classically communicating players. Measurements for the discrimination task include a projective measurement of one party represented by stabilizer states, which enable the other party to perfectly distinguish states that are sampled with high probability.
DRY CUPPING IN CHILDREN WITH FUNCTIONAL CONSTIPATION: A RANDOMIZED OPEN LABEL CLINICAL TRIAL
Shahamat, Mahmoud; Daneshfard, Babak; Najib, Khadijeh-Sadat; Dehghani, Seyed Mohsen; Tafazoli, Vahid; Kasalaei, Afshineh
2016-01-01
Background: As a common disease in pediatrics, constipation poses a high burden to the community. In this study, we aimed to investigate the efficacy of dry cupping therapy (an Eastern traditional manipulative therapy) in children with functional constipation. Materials and Methods: One hundred and twenty children (4-18 years old) diagnosed with functional constipation according to ROME III criteria were assigned to receive a traditional dry cupping protocol on the abdominal wall for 8 minutes every other day or standard laxative therapy (polyethylene glycol (PEG) 40% solution without electrolytes, 0.4 g/kg once daily) for 4 weeks, in an open label randomized controlled clinical trial using a parallel design with a 1:1 allocation ratio. Patients were evaluated prior to and following 2, 4, 8 and 12 weeks of the intervention commencement in terms of the ROME III criteria for functional constipation. Results: There were no significant differences between the two arms regarding demographic and baseline clinical characteristics. After two weeks of the intervention, results were significantly better for most items of the ROME III criteria in the PEG group. In contrast, after four weeks of the intervention, results were significantly better in the cupping group. There was no significant difference in the number of patients with constipation after 4 and 8 weeks of the follow-up period. Conclusion: This study showed that dry cupping of the abdominal wall, as a traditional manipulative therapy, can be as effective as standard laxative therapy in children with functional constipation. PMID:28852716
Jain, Jay Prakash; Leong, F Joel; Chen, Lan; Kalluri, Sampath; Koradia, Vishal; Stein, Daniel S; Wolf, Marie-Christine; Sunkara, Gangadhar; Kota, Jagannath
2017-09-01
The artemether-lumefantrine combination requires food intake for the optimal absorption of lumefantrine. In an attempt to enhance the bioavailability of lumefantrine, new solid dispersion formulations (SDF) were developed, and the pharmacokinetics of two SDF variants were assessed in a randomized, open-label, sequential two-part study in healthy volunteers. In part 1, the relative bioavailability of the two SDF variants was compared with that of the conventional formulation after administration of a single dose of 480 mg under fasted conditions in three parallel cohorts. In part 2, the pharmacokinetics of lumefantrine from both SDF variants were evaluated after a single dose of 480 mg under fed conditions and a single dose of 960 mg under fasted conditions. The bioavailability of lumefantrine from SDF variant 1 and variant 2 increased up to ∼48-fold and ∼24-fold, respectively, relative to that of the conventional formulation. Both variants demonstrated a positive food effect and a less than proportional increase in exposure between the 480-mg and 960-mg doses. Most adverse events (AEs) were mild to moderate in severity and not suspected to be related to the study drug. All five drug-related AEs occurred in subjects taking SDF variant 2. No clinically significant treatment-emergent changes in vital signs, electrocardiograms, or laboratory blood assessments were noted. The solid dispersion formulation enhances the lumefantrine bioavailability to a significant extent, and SDF variant 1 is superior to SDF variant 2. Copyright © 2017 Jain et al.
Lidcombe Program Webcam Treatment for Early Stuttering: A Randomized Controlled Trial.
Bridgman, Kate; Onslow, Mark; O'Brian, Susan; Jones, Mark; Block, Susan
2016-10-01
Webcam treatment is potentially useful for health care in cases of early stuttering in which clients are isolated from specialized treatment services for geographic and other reasons. The purpose of the present trial was to compare outcomes of clinic and webcam deliveries of the Lidcombe Program treatment (Packman et al., 2015) for early stuttering. The design was a parallel, open plan, noninferiority randomized controlled trial of the standard Lidcombe Program treatment and the experimental webcam Lidcombe Program treatment. Participants were 49 children aged 3 years 0 months to 5 years 11 months at the start of treatment. Primary outcomes were the percentage of syllables stuttered at 9 months postrandomization and the number of consultations to complete Stage 1 of the Lidcombe Program. There was insufficient evidence of a posttreatment difference of the percentage of syllables stuttered between the standard and webcam Lidcombe Program treatments. There was insufficient evidence of a difference between the groups for typical stuttering severity measured by parents or the reported clinical relationship with the treating speech-language pathologist. This trial confirmed the viability of the webcam Lidcombe Program intervention. It appears to be as efficacious and economically viable as the standard, clinic Lidcombe Program treatment.
Efficacy of citalopram and moclobemide in patients with social phobia: some preliminary findings.
Atmaca, Murad; Kuloglu, Murat; Tezcan, Ertan; Unal, Ahmet
2002-12-01
The efficacy of irreversible and reversible monoamine oxidase inhibitors (MAOIs) in the treatment of social phobia (SP) is well established. Recently, selective serotonin reuptake inhibitors (SSRIs) have been used more frequently. In the present study, the efficacy and side-effect profile of citalopram, an SSRI, and moclobemide, the only MAOI used in Turkey, were compared. The 71 patients diagnosed with SP according to DSM-III-R were randomly assigned to two subgroups: citalopram (n = 36) or moclobemide (n = 35). The study was an 8-week, randomized, open-label, rater-blinded, parallel-group trial. All patients were assessed with the Hamilton anxiety rating (HAM-A), Liebowitz social anxiety (LSAS), clinical global impression-severity of illness (CGI-SI) and clinical global impression-improvement (CGI-I) scales. There was a similar percentage of responders (citalopram 75%, n = 27 and moclobemide 74.3%, n = 26), defined by a 50% or greater reduction in LSAS total score and ratings of "very much" or "much improved" on the CGI-I. None of the patients withdrew from the study. The results of the present study suggest that citalopram shows promising results in patients with SP. Copyright 2002 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, since it avoids the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overhead compared to the "best" sequential implementation of the BW algorithm.
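As context for the refinement strategy being parallelized, here is a minimal sequential 2D sketch of Bowyer-Watson point insertion; the task-parallel scheduling that is the paper's contribution is not shown, and the super-triangle coordinates and function names are assumptions for the example.

```python
from math import hypot

def circumcircle_contains(tri, p, pts):
    # True if point p lies strictly inside the circumcircle of triangle tri.
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return hypot(p[0] - ux, p[1] - uy) < hypot(ax - ux, ay - uy)

def bowyer_watson(points):
    # Sequential BW: insert points one at a time, carving out the "cavity" of
    # triangles whose circumcircle contains the new point, then retriangulating.
    pts = list(points) + [(-1e6, -1e6), (1e6, -1e6), (0.0, 1e6)]  # super-triangle
    n = len(points)
    tris = [(n, n + 1, n + 2)]
    for i in range(n):
        bad = [t for t in tris if circumcircle_contains(t, pts[i], pts)]
        # Cavity boundary = edges belonging to exactly one bad triangle.
        edges = {}
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                key = tuple(sorted(e))
                edges[key] = edges.get(key, 0) + 1
        tris = [t for t in tris if t not in bad]
        tris += [(a, b, i) for (a, b), count in edges.items() if count == 1]
    # Drop triangles touching the super-triangle vertices.
    return [t for t in tris if max(t) < n]
```

The cavities carved out by concurrent insertions are what the task-parallel scheduler must keep from overlapping.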
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. 
For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on GPU is the same as that on a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
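To make the classifier concrete, the following is a toy serial sketch of the random-forest idea (bootstrap resampling plus majority voting), using depth-1 decision stumps in place of full trees. It is not GPURFSCREEN's implementation, which parallelizes full random forests on the GPU; all names here are illustrative.

```python
import random

def train_stump(data):
    # Exhaustively pick the (feature, threshold, sign) stump with fewest
    # errors on the labelled data; labels are +1 (active) or -1 (inactive).
    best = None
    for f in range(len(data[0][0])):
        for xi, _ in data:
            thr = xi[f]
            for sign in (1, -1):
                pred = [sign * (1 if x[f] >= thr else -1) for x, _ in data]
                err = sum(p != y for p, (_, y) in zip(pred, data))
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best[1:]

def train_forest(data, n_trees=10, seed=0):
    # Bagging: each stump is trained on a bootstrap resample of the data.
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    # Majority vote over the ensemble, as in a random forest.
    votes = sum(sign * (1 if x[f] >= thr else -1) for f, thr, sign in forest)
    return 1 if votes >= 0 else -1
```

On the GPU, both the per-tree training and the per-molecule voting are embarrassingly parallel, which is what the tool exploits.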
Emission of sound from turbulence convected by a parallel flow in the presence of solid boundaries
NASA Technical Reports Server (NTRS)
Goldstein, M. E.; Rosenbaum, B. M.
1973-01-01
A theoretical description is given of the sound emitted from an arbitrary point in a parallel or nearly parallel turbulent shear flow confined to a region near solid boundaries. The analysis begins with Lighthill's formulation of aerodynamic noise and assumes that the turbulence is axisymmetric. Specific results are obtained for the sound emitted from an arbitrary point in a turbulent flow within a semi-infinite, open-ended duct.
Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa
2012-11-01
Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiographs acquired with x-ray sources of different energies. In this paper, the authors performed polyenergetic forward projections using the open computing language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n), and the x-ray fluence is the weighted sum of the exponential of the line integral over all energy bins, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and a GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent digitally reconstructed radiographs (DRRs).
A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, in comparison to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10⁻⁶, making them virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent DRRs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time is fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
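The forward-projection model reduces to a weighted sum of exponentials over energy bins. The sketch below is a hypothetical miniature: it marches along the ray at fixed steps instead of using the exact Siddon voxel traversal, uses a made-up two-bin attenuation table rather than the NIST μ/ρ data, and omits Poisson noise.

```python
import math

# Hypothetical lookup table: linear attenuation (per unit length) for each
# tissue type in each of two energy bins.
MU = {"air": [0.0, 0.0], "tissue": [0.25, 0.20], "bone": [0.60, 0.45]}

def line_integrals(labels, p0, p1, n_bins=2, step=0.05):
    # Crude midpoint ray marching through a 2D map of tissue labels,
    # accumulating one line integral per energy bin.
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, int(length / step))
    sums = [0.0] * n_bins
    for k in range(n):
        t = (k + 0.5) / n
        tissue = labels[int(y0 + t * (y1 - y0))][int(x0 + t * (x1 - x0))]
        for b in range(n_bins):
            sums[b] += MU[tissue][b] * (length / n)
    return sums

def fluence(weights, integrals):
    # Polyenergetic fluence: weighted sum of exp(-line integral) over bins.
    return sum(w * math.exp(-s) for w, s in zip(weights, integrals))
```

Because each energy bin and each detector pixel is independent, the real implementation maps bins and pixels onto GPGPU work-items.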
Methods of parallel computation applied on granular simulations
NASA Astrophysics Data System (ADS)
Martins, Gustavo H. B.; Atman, Allbens P. F.
2017-06-01
Every year, parallel computing becomes cheaper and more accessible. As a consequence, its applications are spreading over all research areas. Granular materials are a promising area for parallel computing. To support this statement we study the impact of parallel computing in simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rise of an intruder confined in a granular medium when vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code performance, testing different methods to improve wall-clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
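The contact-finding optimization mentioned above relies on cell lists (Verlet's cells): particles are binned into cells no smaller than the interaction cutoff, so only same-cell and neighboring-cell pairs need distance checks, reducing the naive O(N²) search. A minimal serial 2D sketch with illustrative names, assuming non-negative coordinates:

```python
from collections import defaultdict
from itertools import product

def find_contacts(positions, cutoff):
    # Bin particles into square cells of side >= cutoff.
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff))].append(i)
    contacts = set()
    # Only pairs in the same or an adjacent cell can be within the cutoff.
    for (cx, cy), members in cells.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for i in members:
                for j in cells.get((cx + dx, cy + dy), ()):
                    if i < j:
                        xi, yi = positions[i]
                        xj, yj = positions[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 <= cutoff ** 2:
                            contacts.add((i, j))
    return contacts
```

In a DEM code the loop over cells is what gets distributed across OpenMP threads, since distinct cells can be processed independently.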
Effective Vectorization with OpenMP 4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham
This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
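OpenMP SIMD directives (e.g. `#pragma omp simd`) apply to C/C++ and Fortran and cannot be written in Python, but the underlying idea, one operation applied to a whole group of data instead of an element-at-a-time loop, can be illustrated with NumPy's vectorized arithmetic. The analogy is ours, not the paper's.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # One element at a time: the analogue of an unvectorized scalar loop.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_vectorized(a, x, y):
    # Whole-array operation: NumPy dispatches to SIMD-capable inner loops,
    # the same "one instruction, many elements" idea a simd directive requests.
    return a * np.asarray(x) + np.asarray(y)
```

Both produce identical results; the difference, as with compiler vectorization, is purely in how the work is executed.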
Spin-orbit coupling and the static polarizability of single-wall carbon nanotubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diniz, Ginetom S., E-mail: ginetom@gmail.com; Ulloa, Sergio E.
2014-07-14
We calculate the static longitudinal polarizability of single-wall carbon nanotubes in the long wavelength limit, taking into account spin-orbit effects. We use a four-orbital orthogonal tight-binding formalism to describe the electronic states and the random phase approximation to calculate the dielectric function. We study the role of both the Rashba and the intrinsic spin-orbit interactions on the longitudinal dielectric response, i.e., when the probing electric field is parallel to the nanotube axis. The spin-orbit interaction modifies the nanotube electronic band dispersions, which may especially result in a small gap opening in otherwise metallic tubes. The bandgap size and state features, the result of competition between Rashba and intrinsic spin-orbit interactions, result in drastic changes in the longitudinal static polarizability of the system. We discuss results for different nanotube types and the dependence on nanotube radius and spin-orbit couplings.
A network approach to decentralized coordination of energy production-consumption grids
Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting in the formation of local distributed energy sources and loads that can operate in parallel, independently from the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that allows us to analytically predict the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, with a small neighborhood, instead of through the obvious but more costly all-to-all coordination. We compute analytically the optimal value of coordinated users in random homogeneous networks. The proposed methodology opens a new way of confronting the analysis of energy demand-side management in networked systems. PMID:29364962
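A stylized numerical stand-in for this local-coordination idea (our illustration, not the paper's actual model): each user repeatedly averages its production-consumption imbalance with a small neighborhood on a ring network, and near-global coordination emerges without any all-to-all communication.

```python
def local_coordination(imbalance, neighbors, rate=0.2, steps=200):
    # Each user nudges its imbalance toward the mean of its neighborhood;
    # repeated local exchanges spread surplus and deficit across the network.
    state = list(imbalance)
    for _ in range(steps):
        nxt = []
        for i, s in enumerate(state):
            mean_nb = sum(state[j] for j in neighbors[i]) / len(neighbors[i])
            nxt.append(s + rate * (mean_nb - s))
        state = nxt
    return state

# Ring network: each node talks only to its two nearest neighbors.
n = 10
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

On a 2-regular ring the total imbalance is conserved at every step, so the dynamics redistribute energy rather than create or destroy it.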
NASA Technical Reports Server (NTRS)
Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.;
2003-01-01
This paper presents a design approach to help attain any random grouping pattern between the microbatteries. In this case, the result is an ability to charge microbatteries in parallel and to discharge microbatteries in parallel or pairs of microbatteries in series.
JANUS: A Compilation System for Balancing Parallelism and Performance in OpenVX
NASA Astrophysics Data System (ADS)
Omidian, Hossein; Lemieux, Guy G. F.
2018-04-01
Embedded systems typically do not have enough on-chip memory for an entire image buffer. Programming systems like OpenCV operate on entire image frames at each step, making them use excessive memory bandwidth and power. In contrast, the paradigm used by OpenVX is much more efficient; it uses image tiling, and the compilation system is allowed to analyze and optimize the operation sequence, specified as a compute graph, before doing any pixel processing. In this work, we are building a compilation system for OpenVX that can analyze and optimize the compute graph to take advantage of parallel resources in many-core systems or FPGAs. Using a database of prewritten OpenVX kernels, it automatically adjusts the image tile size as well as using kernel duplication and coalescing to meet a defined area (resource) target, or to meet a specified throughput target. This allows a single compute graph to target implementations with a wide range of performance needs or capabilities, e.g. from handheld to datacenter, that use minimal resources and power to reach the performance target.
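The memory argument for tiling can be demonstrated with a toy kernel: a 3x3 box filter computed tile-by-tile (each tile read with a 1-pixel halo) produces the same interior result as whole-frame processing while keeping only a small working set resident. This sketch is our illustration and not the OpenVX API.

```python
import numpy as np

def blur_whole(img):
    # Reference: 3x3 box filter over the whole frame (interior pixels only).
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out[1:-1, 1:-1] += img[1 + dy:img.shape[0] - 1 + dy,
                                   1 + dx:img.shape[1] - 1 + dx]
    return out / 9.0

def blur_tiled(img, tile=8):
    # Tiled version: process one tile (plus a 1-pixel halo) at a time, so only
    # a (tile+2)^2 working set ever has to be resident in on-chip memory.
    out = np.zeros_like(img)
    h, w = img.shape
    for y0 in range(1, h - 1, tile):
        for x0 in range(1, w - 1, tile):
            y1, x1 = min(y0 + tile, h - 1), min(x0 + tile, w - 1)
            halo = img[y0 - 1:y1 + 1, x0 - 1:x1 + 1]
            acc = np.zeros((y1 - y0, x1 - x0))
            for dy in (0, 1, 2):
                for dx in (0, 1, 2):
                    acc += halo[dy:dy + y1 - y0, dx:dx + x1 - x0]
            out[y0:y1, x0:x1] = acc / 9.0
    return out
```

Because tiles only overlap through their halos, they can also be processed in parallel, which is what the compute-graph compiler exploits.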
DICE/ColDICE: 6D collisionless phase space hydrodynamics using a lagrangian tesselation
NASA Astrophysics Data System (ADS)
Sousbie, Thierry
2018-01-01
DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space on massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.
Improvisation and Meditation in the Academy: Parallel Ordeals, Insights, and Openings
ERIC Educational Resources Information Center
Sarath, Edward
2015-01-01
This article examines parallel challenges and avenues for progress I have observed in my efforts to introduce improvisation in classical music studies, and meditation in music and overall academic settings. Though both processes were once central in their respective knowledge traditions--improvisation in earlier eras of European classical music,…
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix and other methods. To address the problems brought by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results. It is also shown that the improved inversion algorithm is effective and feasible. The designed parallel algorithm performs better than the other CUDA-based ones; the maximum speedup exceeds 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
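The scalability metrics used in the performance analysis are conventionally defined as follows (a generic sketch, not the authors' code):

```python
def speedup(t_one_gpu, t_n_gpu):
    # Multi-GPU speedup: how much faster n GPUs are than one GPU.
    return t_one_gpu / t_n_gpu

def efficiency(t_one_gpu, t_n_gpu, n_gpus):
    # Multi-GPU efficiency: speedup normalized by GPU count (1.0 = ideal).
    return speedup(t_one_gpu, t_n_gpu) / n_gpus
```

Efficiency close to 1.0 as GPUs are added indicates the parallel algorithm scales well; falling efficiency reveals communication or load-balance overheads.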
Reconfigurable Model Execution in the OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Hwang, John T.
2017-01-01
NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables, i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems.
Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
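A minimal sketch of the derivative-assembly idea, composing a total derivative from component partials via the chain rule; this is a hypothetical two-component illustration, not the OpenMDAO/MAUD API.

```python
import math

# Two toy "components", each returning its output and its partial derivatives.
def comp1(x):
    # y = x**2, with partial dy/dx.
    return x**2, {"dy/dx": 2 * x}

def comp2(y):
    # z = sin(y), with partial dz/dy.
    return math.sin(y), {"dz/dy": math.cos(y)}

def total_derivative(x):
    # Chain rule across the component boundary: dz/dx = dz/dy * dy/dx.
    y, p1 = comp1(x)
    z, p2 = comp2(y)
    return z, p2["dz/dy"] * p1["dy/dx"]
```

Reconfigurability means a component like `comp2` could be swapped for a higher-fidelity version mid-run, with the framework re-deriving the assembled total derivative automatically.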
Abdelnour, Arturo; Silas, Peter E; Lamas, Marta Raquel Valdés; Aragón, Carlos Fernándo Grazioso; Chiu, Nan-Chang; Chiu, Cheng-Hsun; Acuña, Teobaldo Herrera; Castrejón, Tirza De León; Izu, Allen; Odrljin, Tatjana; Smolenov, Igor; Hohenboken, Matthew; Dull, Peter M
2014-02-12
The highest risk for invasive meningococcal disease (IMD) is in infants aged <1 year. Quadrivalent meningococcal conjugate vaccination has the potential to prevent IMD caused by serogroups A, C, W and Y. This phase 3b, multinational, open-label, randomized, parallel-group, multicenter study evaluated the safety of a 4-dose series of MenACWY-CRM, a quadrivalent meningococcal conjugate vaccine, concomitantly administered with routine vaccinations to healthy infants. Two-month-old infants were randomized 3:1 to receive MenACWY-CRM with routine vaccines or routine vaccines alone at ages 2, 4, 6 and 12 months. Adverse events (AEs) that were medically attended and serious adverse events (SAEs) were collected from all subjects from enrollment through 18 months of age. In a subset, detailed safety data (local and systemic solicited reactions and all AEs) were collected for 7 days post vaccination. The primary objective was a non-inferiority comparison of the percentages of subjects with ≥1 severe systemic reaction during Days 1-7 after any vaccination of MenACWY-CRM plus routine vaccinations versus routine vaccinations alone (criterion: upper limit of 95% confidence interval [CI] of group difference <6%). A total of 7744 subjects were randomized, with 1898 in the detailed safety arm. The percentage of subjects with severe systemic reactions was 16% after MenACWY-CRM plus routine vaccines and 13% after routine vaccines alone (group difference 3.0%; 95% CI -0.8%, 6.4%). Although the non-inferiority criterion was not met, post hoc analysis controlling for significant center and group-by-center differences revealed that MenACWY-CRM plus routine vaccinations was non-inferior to routine vaccinations alone (group difference -0.1% [95% CI -4.9%, 4.7%]). Rates of solicited AEs, medically attended AEs, and SAEs were similar across groups.
In a large multinational safety study, a 4-dose series of MenACWY-CRM concomitantly administered with routine vaccines was clinically acceptable with a similar safety profile to routine vaccines given alone. Copyright © 2013 Elsevier Ltd. All rights reserved.
Paris, A; Gonnet, N; Chaussard, C; Belon, P; Rocourt, F; Saragaglia, D; Cracowski, J L
2008-01-01
Aims: The efficacy of homeopathy is still under debate. The objective of this study was to assess the efficacy of homeopathic treatment (Arnica montana 5 CH, Bryonia alba 5 CH, Hypericum perforatum 5 CH and Ruta graveolens 3 DH) on cumulated morphine intake delivered by PCA over 24 h after knee ligament reconstruction. Methods: This was an add-on randomized controlled study with three parallel groups: a double-blind homeopathic or placebo arm and an open-label noninterventional control arm. Eligible patients were 18–60-year-old candidates for surgery of the anterior cruciate ligament. Treatment was administered the evening before surgery and continued for 3 days. The primary end-point was whether cumulated morphine intake delivered by PCA during the first 24 h was below, or greater than or equal to, 10 mg day−1. Results: One hundred and fifty-eight patients were randomized (66 in the placebo arm, 67 in the homeopathic arm and 25 in the noninterventional group). There was no difference between the treated and the placebo group for the primary end-point (mean (95% CI) 48% (35.8, 56.3) and 56% (43.7, 68.3) of patients required less than 10 mg day−1 of morphine in the two groups, respectively). The homeopathy treatment had no effect on morphine intake between 24 and 72 h, on the visual analogue pain scale, or on quality of life assessed by the SF-36 questionnaire. In addition, these parameters were not different in patients enrolled in the open-label noninterventional control arm. Conclusions: The complex of homeopathy tested in this study was not superior to placebo in reducing 24 h morphine consumption after knee ligament reconstruction. What is already known about this subject: The efficacy of homeopathy is still under debate and a recent meta-analysis recommended further randomized double-blind clinical trials to identify any clinical situation in which homeopathy might be effective. 
What this study adds: The complex of homeopathy tested in this study (Arnica montana 5 CH, Bryonia alba 5 CH, Hypericum perforatum 5 CH and Ruta graveolens 3 DH) is not superior to placebo in reducing 24 h morphine consumption after knee ligament reconstruction. PMID:18251757
A randomized clinical trial of histamine 2 receptor antagonism in treatment-resistant schizophrenia.
Meskanen, Katarina; Ekelund, Heidi; Laitinen, Jarmo; Neuvonen, Pertti J; Haukka, Jari; Panula, Pertti; Ekelund, Jesper
2013-08-01
Histamine has important functions as a regulator of several other key neurotransmitters. Patients with schizophrenia have lower histamine H1 receptor levels. Since a case report in 1990 of an effect of the H2 antagonist famotidine on negative symptoms in schizophrenia, some open-label trials have been performed, but no randomized controlled trial had been conducted. Recently, it was shown that clozapine is a full inverse agonist at the H2 receptor. We performed a researcher-initiated, academically financed, double-blind, placebo-controlled, parallel-group, randomized trial with the histamine H2 antagonist famotidine in treatment-resistant schizophrenia. Thirty subjects with schizophrenia were randomized to receive either famotidine (100 mg twice daily, n = 16) or placebo (n = 14) orally, added to their normal treatment regimen for 4 weeks. They were followed up weekly with the Scale for the Assessment of Negative Symptoms (SANS), the Positive and Negative Syndrome Scale (PANSS), and the Clinical Global Impression (CGI) Scale. In the famotidine group, the SANS score was reduced by 5.3 (SD, 13.1) points, whereas in the placebo group the SANS score was virtually unchanged (mean change, +0.2 [SD, 9.5]). The difference did not reach statistical significance (P = 0.134) in Mann-Whitney U analysis. However, the PANSS Total score and the General subscore as well as the CGI showed significantly (P < 0.05) greater change in the famotidine group than in the placebo group. No significant adverse effects were observed. This is the first placebo-controlled, randomized clinical trial showing a beneficial effect of histamine H2 antagonism in schizophrenia. H2 receptor antagonism may provide a new alternative for the treatment of schizophrenia.
Mahon, Susan; Krishnamurthi, Rita; Vandal, Alain; Witt, Emma; Barker-Collo, Suzanne; Parmar, Priya; Theadom, Alice; Barber, Alan; Arroll, Bruce; Rush, Elaine; Elder, Hinemoa; Dyer, Jesse; Feigin, Valery
2018-02-01
Rationale: Stroke is a major cause of death and disability worldwide, yet 80% of strokes can be prevented through modification of risk factors and lifestyle and by medication. While management strategies for primary stroke prevention in individuals at high cardiovascular disease risk are well established, they are underutilized, and existing practices of primary stroke prevention are inadequate. Behavioral interventions are emerging as highly promising strategies to improve cardiovascular disease risk factor management. Health Wellness Coaching is an innovative, patient-focused, cost-effective, multidimensional psychological intervention designed to motivate participants to adhere to recommended medication and lifestyle changes, and it has been shown to improve health and enhance well-being. Aims and/or hypothesis: To determine the effectiveness of Health Wellness Coaching for primary stroke prevention in an ethnically diverse sample including Māori, Pacific Island, New Zealand European and Asian participants. Design: A parallel, prospective, randomized, open-treatment, single-blinded end-point trial. Participants include 320 adults with absolute five-year cardiovascular disease risk ≥ 10%, calculated using the PREDICT web-based clinical tool. Randomization will be to Health Wellness Coaching or usual care groups. Participants randomized to Health Wellness Coaching will receive 15 coaching sessions over nine months. Study outcomes: The primary outcome is a substantial relative risk reduction of five-year cardiovascular disease risk at nine months post-randomization, defined as a 10% relative risk reduction among those at moderate five-year cardiovascular disease risk (10-15%) and 25% among those at high risk (>15%). Discussion: This clinical trial will determine whether Health Wellness Coaching is an effective intervention for reducing modifiable risk factors, and hence decreasing the risk of stroke and cardiovascular disease.
Paging memory from random access memory to backing storage in a parallel computer
Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E
2013-05-21
Paging memory from random access memory ('RAM') to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
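As a rough illustration of the claimed scheme, the toy Python model below keeps a bounded number of pages in local "RAM", evicts the least recently used page to an object standing in for the second node's backing storage, and faults evicted pages back in on access. All class and method names are invented for this sketch and do not come from the patent.

```python
# Toy model of paging local RAM out to a remote node's backing storage.
class RemoteBackingStore:
    """Stands in for the second compute node that holds evicted pages."""
    def __init__(self):
        self._pages = {}

    def store(self, page_id, data):
        self._pages[page_id] = data

    def load(self, page_id):
        return self._pages.pop(page_id)

    def contains(self, page_id):
        return page_id in self._pages

class Pager:
    """Keeps at most `capacity` pages in local 'RAM'; evicts LRU pages remotely."""
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing
        self.ram = {}    # page_id -> data
        self.lru = []    # least recently used first

    def access(self, page_id, data=None):
        if page_id in self.ram:                  # hit: refresh recency below
            self.lru.remove(page_id)
        else:                                    # miss: make room, then fault in
            if len(self.ram) >= self.capacity:   # swap LRU victim to remote store
                victim = self.lru.pop(0)
                self.backing.store(victim, self.ram.pop(victim))
            self.ram[page_id] = (self.backing.load(page_id)
                                 if self.backing.contains(page_id) else data)
        self.lru.append(page_id)
        return self.ram[page_id]

store = RemoteBackingStore()
pager = Pager(capacity=2, backing=store)
pager.access("p1", "alpha")
pager.access("p2", "beta")
pager.access("p3", "gamma")              # evicts p1 to the remote store
assert store.contains("p1")
assert pager.access("p1") == "alpha"     # page fault: p1 swapped back in
```

The point of the sketch is only the division of labor: the first node's pager decides what to evict, while the second node does nothing but hold evicted page contents.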
NASA Astrophysics Data System (ADS)
Qiang, Ji
2017-10-01
A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in beam physics of particle accelerators. In this paper, we present a fast, efficient method for solving the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(Nu log(Nmode)), where Nu is the total number of unknowns and Nmode is the maximum number of longitudinal or azimuthal modes. This saves both computational time and memory compared with using an artificial boundary condition in a large extended computational domain. The new 3D Poisson solver is parallelized using a message passing interface (MPI) on multi-processor computers and shows reasonable parallel performance up to hundreds of processor cores.
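The core spectral idea along the periodic direction can be illustrated in one dimension. The sketch below is a much-simplified analogue, not the paper's 3D solver (which also treats open transverse boundaries): solve u'' = f with periodic boundary conditions by transforming to Fourier space, dividing each mode by minus the squared wavenumber, and transforming back. A naive O(n²) DFT stands in for the FFT a real implementation would use.

```python
import cmath, math

def dft(a):
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(A):
    n = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def poisson_periodic(f, L):
    """Solve u'' = f on [0, L) with periodic BCs; the mean of u is set to 0."""
    n = len(f)
    F = dft(f)
    U = [0j] * n                              # k = 0 mode: fix the free constant
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n      # signed wavenumber
        U[k] = F[k] / (-(2 * math.pi * kk / L) ** 2)
    return [u.real for u in idft(U)]

n, L = 64, 2 * math.pi
x = [L * j / n for j in range(n)]
f = [-math.sin(xi) for xi in x]               # u'' = -sin(x)  =>  u = sin(x)
u = poisson_periodic(f, L)
err = max(abs(u[j] - math.sin(x[j])) for j in range(n))
assert err < 1e-8                             # spectrally exact for this f
```

Because the right-hand side here is a single Fourier mode, the discrete solution matches sin(x) to rounding error, which is the spectral-accuracy property the paper's longitudinal treatment relies on.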
Abe, Masanori; Higuchi, Terumi; Moriuchi, Masari; Okamura, Masahiro; Tei, Ritsukou; Nagura, Chinami; Takashima, Hiroyuki; Kikuchi, Fumito; Tomita, Hyoe; Okada, Kazuyoshi
2016-06-01
Saxagliptin is a dipeptidyl peptidase-4 inhibitor that was approved in Japan for the treatment of type 2 diabetes in 2013. We examined its efficacy and safety in Japanese hemodialysis patients with diabetic nephropathy. In this prospective, open-label, parallel-group study, Japanese hemodialysis patients were randomized to receive either oral saxagliptin (2.5 mg/day) or usual care (control group) for 24 weeks. Before randomization, patients received fixed doses of conventional antidiabetic drugs (oral drugs and/or insulin) for 8 weeks; these drugs were continued during the study. Endpoints included changes in glycated albumin (GA), hemoglobin A1c (HbA1c), postprandial plasma glucose (PPG), and adverse events. Each group included 41 patients. Mean GA, HbA1c, and PPG decreased significantly in the saxagliptin group (-3.4%, -0.6% [-7 mmol/mol], and -38.3 mg/dL, respectively; all P<0.0001) but not in the control group (0%, -0.1% [-1 mmol/mol], and -3.7 mg/dL, respectively) (P<0.0001, P<0.001, and P<0.0001, respectively). In saxagliptin-treated patients, the reduction in GA was significantly greater when saxagliptin was administered as monotherapy than as combination therapy (-4.2% vs. -3.0%, P=0.012) despite similar baseline values (24.5% vs. 23.3%). Reductions in GA, HbA1c, and PPG were greater in patients whose baseline values exceeded the median (23.8% for GA, 6.6% for HbA1c, and 180 mg/dL for PPG). There were no adverse events associated with saxagliptin. Saxagliptin (2.5 mg/day) was effective and well tolerated when used as monotherapy or combined with other antidiabetic drugs in Japanese hemodialysis patients with type 2 diabetes. UMIN000018445. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Kadouch, D J; Elshot, Y S; Zupan-Kajcovski, B; van Haersma de With, A S E; van der Wal, A C; Leeflang, M; Jóźwiak, K; Wolkerstorfer, A; Bekkenk, M W; Spuls, P I; de Rie, M A
2017-09-01
Routine punch biopsies are considered to be standard care for diagnosing and subtyping basal cell carcinoma (BCC) when clinically suspected. We assessed the efficacy of a one-stop-shop concept using in vivo reflectance confocal microscopy (RCM) imaging as a diagnostic tool vs. standard care for surgical treatment in patients with clinically suspected BCC. In this open-label, parallel-group, noninferiority, randomized controlled multicentre trial we enrolled patients with clinically suspected BCC at two tertiary referral centres in Amsterdam, the Netherlands. Patients were randomly assigned to the RCM one-stop-shop (diagnosing and subtyping using RCM followed by direct surgical excision) or standard care (planned excision based on the histological diagnosis and subtype of a punch biopsy). The primary outcome was the proportion of patients with tumour-free margins after surgical excision of BCC. Of the 95 patients included, 73 (77%) had a BCC histologically confirmed using a surgical excision specimen. All patients (40 of 40, 100%) in the one-stop-shop group had tumour-free margins. In the standard-care group tumour-free margins were found in all but two patients (31 of 33, 94%). The difference in the proportion of patients with tumour-free margins after BCC excision between the one-stop-shop group and the standard-care group was -0.06 (90% confidence interval -0.17 to 0.01), establishing noninferiority. The proposed new treatment strategy seems suitable in facilitating early diagnosis and direct treatment for patients with BCC, depending on factors such as availability of RCM, size and site of the lesion, patient preference and whether direct surgical excision is feasible. © 2017 The Authors. British Journal of Dermatology published by John Wiley & Sons Ltd on behalf of British Association of Dermatologists.
Garg, Satish K; Mathieu, Chantal; Rais, Nadeem; Gao, Haitao; Tobian, Janet A; Gates, Jeffrey R; Ferguson, Jeffrey A; Webb, David M; Berclaz, Pierre-Yves
2009-09-01
Patients with type 1 diabetes require intensive insulin therapy for optimal glycemic control. AIR® inhaled insulin (system from Eli Lilly and Company, Indianapolis, IN; AIR is a registered trademark of Alkermes, Inc., Cambridge, MA) may be an efficacious and safe alternative to subcutaneously injected (SC) mealtime insulin. This was a Phase 3, 2-year, randomized, open-label, active-comparator, parallel-group study in 385 patients with type 1 diabetes who were randomly assigned to receive AIR insulin or SC insulin (regular human insulin or insulin lispro) at mealtimes. Both groups received insulin glargine once daily. Efficacy measures included mean change in hemoglobin A1C (A1C) from baseline to end point, eight-point self-monitored blood glucose profiles, and insulin dosage. Safety assessments included hypoglycemic events, pulmonary function tests, adverse events, and insulin antibody levels. In both treatment groups, only 20% of subjects reached the target of A1C <7.0%. A significant A1C difference of 0.44% was seen favoring SC insulin, with no difference between the groups in insulin doses or hypoglycemic events at end point. Patients in both treatment groups experienced progressive decreases in lung function, but larger (reversible) decrements in diffusing capacity of the lung for carbon monoxide (DL(CO)) were associated with AIR insulin treatment. Greater weight gain was seen with SC insulin treatment. The AIR inhaled insulin program was terminated by the sponsor prior to availability of any Phase 3 data for reasons unrelated to safety or efficacy. Despite early termination, this trial provides evidence that AIR insulin was less efficacious in lowering A1C and was associated with a greater decrease in DL(CO) and increased incidence of cough than SC insulin in patients with type 1 diabetes.
Mischoulon, David; Shelton, Richard C; Baer, Lee; Bobo, William V; Curren, Laura; Fava, Maurizio; Papakostas, George I
2017-04-01
To examine motoric, cardiovascular, endocrine, and metabolic effects of adjunctive ziprasidone in adults with major depressive disorder (MDD) and prior nonresponse to 8 weeks of open-label escitalopram. A multicenter, parallel, randomized, double-blind, placebo-controlled trial was conducted at 3 US academic medical centers from July 2008 to October 2013. Recruited were 139 outpatients with persistent DSM-IV MDD following an 8-week open-label trial of escitalopram. Subjects were then randomized to adjunctive ziprasidone (escitalopram + ziprasidone, n = 71) or placebo (escitalopram + placebo, n = 68) for 8 additional weeks. Cardiac and metabolic measures were obtained at each treatment visit. Barnes Akathisia Scale and Abnormal Involuntary Movement Scale (AIMS) scores were also obtained. Changes in outcome measures for each treatment group were compared by independent-samples t test. A trend toward significance (P = .06) in corrected QT interval (QTc) increase was observed for ziprasidone (mean [SD] = 8.8 [20.2] milliseconds) versus placebo (-0.02 [25.5] milliseconds). Ziprasidone-treated patients had a significantly greater increase in global akathisia scores (P = .01) and significant weight increase (mean [SD] = 3.5 [11.8] kg, or 7.7 [26.1] lb) compared to placebo (1.0 [6.4] kg, or 2.2 [14.1] lb) (P = .03). No significant changes in AIMS scores were observed for either treatment group. Adjunctive ziprasidone, added to escitalopram, led to a greater weight gain and greater but modest akathisia compared to placebo. The effect of ziprasidone on QTc showed a trend toward significance, and therefore caution should be used in the administration of ziprasidone. While ziprasidone augmentation in patients with MDD appears safe, precautions should be taken in practice, specifically regular monitoring of electrocardiogram, weight, extrapyramidal symptoms, and involuntary movements. ClinicalTrials.gov identifier: NCT00633399. 
© Copyright 2016 Physicians Postgraduate Press, Inc.
2012-01-01
Background: Reducing low-density lipoprotein cholesterol (LDL-C) is associated with reduced risk for major coronary events. Despite statin efficacy, a considerable proportion of statin-treated hypercholesterolemic patients fail to reach therapeutic LDL-C targets as defined by guidelines. This study compared the efficacy of ezetimibe added to ongoing statins with doubling the dose of ongoing statin in a population of Taiwanese patients with hypercholesterolemia. Methods: This was a randomized, open-label, parallel-group comparison study of ezetimibe 10 mg added to ongoing statin compared with doubling the dose of ongoing statin. Adult Taiwanese hypercholesterolemic patients not at optimal LDL-C levels with previous statin treatment were randomized (N = 83) to ongoing statin + ezetimibe (simvastatin, atorvastatin or pravastatin + ezetimibe at doses of 20/10, 10/10 or 20/10 mg) or doubling the dose of ongoing statin (simvastatin 40 mg, atorvastatin 20 mg or pravastatin 40 mg) for 8 weeks. Percent change in total cholesterol, LDL-C, high-density lipoprotein cholesterol (HDL-C) and triglycerides, and specified safety parameters were assessed at 4 and 8 weeks. Results: At 8 weeks, patients treated with statin + ezetimibe experienced significantly greater reductions compared with doubling the statin dose in LDL-C (26.2% vs 17.9%, p = 0.0026) and total cholesterol (20.8% vs 12.2%, p = 0.0003). The percentage of patients achieving the treatment goal was greater for statin + ezetimibe (58.6%) vs doubling statin (41.2%), but the difference was not statistically significant (p = 0.1675). The safety and tolerability profiles were similar between treatments. Conclusion: Ezetimibe added to ongoing statin therapy resulted in significantly greater lipid-lowering compared with doubling the dose of statin in Taiwanese patients with hypercholesterolemia. Studies to assess clinical outcome benefit are ongoing. Trial registration: Registered at ClinicalTrials.gov: NCT00652327. PMID:22621316
SWMM5 Application Programming Interface and PySWMM: A Python Interfacing Wrapper
In support of the OpenWaterAnalytics open source initiative, the PySWMM project encompasses the development of a Python interfacing wrapper to SWMM5 with parallel ongoing development of the USEPA Stormwater Management Model (SWMM5) application programming interface (API). ...
Datacube Services in Action, Using Open Source and Open Standards
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2016-12-01
Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source on www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use in databases individually holding dozens of terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman is the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as experience gained.
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
OpenMP performance for benchmark 2D shallow water equations using LBM
NASA Astrophysics Data System (ADS)
Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry
2018-03-01
Shallow water equations, commonly referred to as Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically using several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using LBM (known as LABSWE) and simulated in parallel using OpenMP. To evaluate the performance of the parallel algorithm with 2 and 4 threads, ten different grid sizes Lx and Ly are examined. The results show that using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, for grid size 1000 × 500, the computation times observed with 2 and 4 threads were 93.54 s and 333.243 s, respectively.
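The OpenMP-style work sharing used for a lattice update can be sketched with a thread pool: each worker updates a contiguous block of grid rows, mirroring how an OpenMP parallel-for partitions the lattice. This is an illustrative sketch only (CPython threads do not speed up pure-Python loops the way OpenMP speeds up compiled loops); the point is the row-wise domain decomposition and its agreement with the serial result.

```python
from concurrent.futures import ThreadPoolExecutor

def relax_rows(src, dst, row_lo, row_hi):
    """Average each interior cell with its 4 neighbours, for rows [row_lo, row_hi)."""
    ny = len(src[0])
    for i in range(max(row_lo, 1), min(row_hi, len(src) - 1)):
        for j in range(1, ny - 1):
            dst[i][j] = 0.25 * (src[i-1][j] + src[i+1][j] + src[i][j-1] + src[i][j+1])

def relax_parallel(src, n_threads):
    """One sweep with the rows statically chunked across n_threads workers."""
    nx = len(src)
    dst = [row[:] for row in src]                      # boundary rows kept as-is
    chunk = (nx + n_threads - 1) // n_threads
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(relax_rows, src, dst, t * chunk, (t + 1) * chunk)
                   for t in range(n_threads)]
        for f in futures:
            f.result()                                 # propagate any worker exception
    return dst

grid = [[float(i * 7 + j) for j in range(8)] for i in range(8)]
serial = [row[:] for row in grid]
relax_rows(grid, serial, 0, 8)
assert relax_parallel(grid, 4) == serial   # same answer, work split across threads
```

Because every worker reads only from `src` and writes disjoint rows of `dst`, the sweep is race-free, which is the same discipline an OpenMP stream-and-collide LBM loop follows.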
NASA Technical Reports Server (NTRS)
Lawson, Gary; Poteat, Michael; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2016-01-01
In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with the Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory access. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23X was measured for MPI+SMPI, but only 10X was measured for MPI+OpenMP.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than regular algorithms.
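The paper's traffic-replay mechanism is not reproduced here, but the payoff it targets can be illustrated: restricting stored results to a user-configured region of interest shrinks the data dramatically. The sketch below uses a hypothetical 1D mesh and field, invented for this illustration.

```python
import struct

def pack_field(values):
    """Serialize a list of floats as raw IEEE 754 doubles (8 bytes each)."""
    return struct.pack(f"{len(values)}d", *values)

# Hypothetical mesh: 10,000 cell centres along a line, with a scalar field.
centres = [i * 0.01 for i in range(10_000)]
field = [c * c for c in centres]

# User-configured region of interest, in mesh coordinates.
roi = (20.0, 30.0)
kept = [(c, v) for c, v in zip(centres, field) if roi[0] <= c <= roi[1]]

full_bytes = len(pack_field(field))                       # whole field on disk
roi_bytes = (len(pack_field([v for _, v in kept]))        # ROI values...
             + len(pack_field([c for c, _ in kept])))     # ...plus their coordinates
assert roi_bytes * 4 < full_bytes   # ROI storage is a small fraction of the full field
```

Unlike general-purpose compressors, which must keep every value, discarding cells outside the region of interest scales with the ROI size rather than the mesh size, which matches the abstract's claim that the approach pays off most on large meshes.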
Leveraging human oversight and intervention in large-scale parallel processing of open-source data
NASA Astrophysics Data System (ADS)
Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.
2015-05-01
The popularity of cloud computing along with the increased availability of cheap storage have led to the necessity of elaborating and transforming large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done in a chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of resources due to limited availability. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the parallel processing of large amounts of open-source data.
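The Map-Reduce pattern, and the concern above about bounding reprocessing after an asynchronous human correction, can be sketched in a few lines. This is a minimal in-memory sketch, not the paper's framework: after an analyst corrects one record, only the keys touched by the old and new versions of that record are re-reduced.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Plain map-reduce: map each record to (key, value) pairs, then reduce per key."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

def word_count_mapper(line):
    return [(word, 1) for word in line.split()]

records = ["open source data", "open data pipelines"]
counts = map_reduce(records, word_count_mapper, sum)
assert counts["open"] == 2 and counts["data"] == 2

# A human analyst corrects one record; only keys emitted by the old and new
# versions of that record are "dirty" and need re-reducing.
old, new = records[1], "open data pipeline"
dirty = {k for k, _ in word_count_mapper(old)} | {k for k, _ in word_count_mapper(new)}
records[1] = new
fresh = map_reduce(records, word_count_mapper, sum)       # ground truth: full rerun
updated = {k: v for k, v in counts.items() if k not in dirty}
updated.update({k: v for k, v in fresh.items() if k in dirty})
assert updated == fresh   # same answer with only the dirty keys recomputed
```

The invariant the sketch checks, that selective recomputation of dirty keys reproduces the full rerun, is exactly what makes minimizing reprocessing safe after a late-arriving human correction.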
NASA Astrophysics Data System (ADS)
Frickenhaus, Stephan; Hiller, Wolfgang; Best, Meike
The portable software FoSSI is introduced which, in combination with additional free solver software packages, allows for an efficient and scalable parallel solution of large sparse systems of linear equations arising in finite element model codes. FoSSI is intended to support rapid model code development, completely hiding the complexity of the underlying solver packages. In particular, the model developer need not be an expert in parallelization and is yet free to switch between different solver packages by simple modifications of the interface call. FoSSI offers an efficient and easy, yet flexible interface to several parallel solvers, most of them available on the web, such as PETSc, AZTEC, MUMPS, PILUT and HYPRE. FoSSI makes use of the concept of handles for vectors, matrices, preconditioners and solvers that is frequently used in solver libraries. Hence, FoSSI allows for a flexible treatment of several linear equation systems and associated preconditioners at the same time, even in parallel on separate MPI communicators. The second special feature of FoSSI is the task specifier, a combination of keywords, each configuring a certain phase in the solver setup. This enables the user to control a solver through one unique subroutine. Furthermore, FoSSI has rather similar features for all solvers, making a fast solver intercomparison or exchange an easy task. FoSSI is community software, proven in an adaptive 2D atmosphere model and a 3D primitive equation ocean model, both formulated in finite elements. The present paper discusses perspectives of an OpenMP implementation of parallel iterative solvers based on domain decomposition methods. This approach to OpenMP solvers is rather attractive, as the code for domain-local operations of factorization, preconditioning and matrix-vector products can be readily taken from a sequential implementation that is also suitable for use in an MPI variant. 
Code development in this direction is in an advanced state under the name ScOPES: the Scalable Open Parallel sparse linear Equations Solver.
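The handle and task-specifier ideas described above can be sketched as follows. This is a hypothetical Python illustration, not FoSSI's actual interface (all names are invented): a handle is an opaque id behind which the library keeps matrix and solver state, and a task string of '+'-joined keywords selects which setup phases run in a single call.

```python
class SolverLibrary:
    """Toy handle-based solver facade in the spirit of the abstract's description."""
    def __init__(self):
        self._objects = {}   # handle -> per-system state
        self._next = 0

    def new_handle(self, diag):
        """Register a (here: diagonal) matrix and hand back an opaque handle."""
        self._next += 1
        self._objects[self._next] = {"A": diag, "factored": False}
        return self._next

    def solve(self, handle, rhs, task="init+factor+solve"):
        """Run the phases named in the task specifier against one handle."""
        state = self._objects[handle]
        for phase in task.split("+"):
            if phase == "factor" and not state["factored"]:
                # toy 'factorization': precompute reciprocals of the diagonal
                state["inv"] = [1.0 / a for a in state["A"]]
                state["factored"] = True
            elif phase == "solve":
                return [inv_a * b for inv_a, b in zip(state["inv"], rhs)]
        return None

lib = SolverLibrary()
h = lib.new_handle([2.0, 4.0, 5.0])                # system diag(2, 4, 5) x = b
x = lib.solve(h, [2.0, 8.0, 15.0])                 # full task: init+factor+solve
assert x == [1.0, 2.0, 3.0]
x2 = lib.solve(h, [4.0, 4.0, 5.0], task="solve")   # reuse the stored factorization
assert x2 == [2.0, 1.0, 1.0]
```

The design benefit the sketch mirrors is that several handles can coexist, each with its own matrix and factorization state, while the caller drives everything through one subroutine and a keyword string.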
Goebel, Andreas; Bisla, Jatinder; Carganillo, Roy; Frank, Bernhard; Gupta, Rima; Kelly, Joanna; McCabe, Candy; Murphy, Caroline; Padfield, Nick; Phillips, Ceri; Sanders, Mark; Serpell, Mick; Shenker, Nick; Shoukrey, Karim; Wyatt, Lynne; Ambler, Gareth
2017-10-03
Two small trials suggest that low-dose intravenous immunoglobulin (IVIg) may improve the symptoms of complex regional pain syndrome (CRPS), a rare posttraumatic pain condition. To confirm the efficacy of low-dose IVIg compared with placebo in reducing pain during a 6-week period in adult patients who had CRPS from 1 to 5 years. 1:1 parallel, randomized, placebo-controlled, multicenter trial for 6 weeks, with an optional 6-week open extension. Patients were randomly assigned to 1 of 2 study groups between 27 August 2013 and 28 October 2015; the last patient completed follow-up on 21 March 2016. Patients, providers, researchers, and outcome assessors were blinded to treatment assignment. (ISRCTN42179756). 7 secondary and tertiary care pain management centers in the United Kingdom. 111 patients with moderate or severe CRPS of 1 to 5 years' duration. IVIg, 0.5 g/kg of body weight, or visually indistinguishable placebo of 0.1% albumin in saline on days 1 and 22 after randomization. The primary outcome was 24-hour average pain intensity, measured daily between days 6 and 42, on an 11-point (0- to 10-point) rating scale. Secondary outcomes were pain interference and quality of life. The primary analysis sample consisted of 108 eligible patients, 103 of whom had outcome data. Mean (average) pain scores were 6.9 points (SD, 1.5) for placebo and 7.2 points (SD, 1.3) for IVIg. The adjusted difference in means was 0.27 (95% CI, -0.25 to 0.80; P = 0.30), which excluded the prespecified, clinically important difference of -1.2. No statistically significant differences in secondary outcomes were found between the groups. In the open extension, 12 of the 67 patients (18%) who received 2 IVIg infusions had pain reduction of at least 2 points compared with their baseline score. Two patients in the blinded phase (1 in the placebo and 1 in the IVIg group) and 4 in the open IVIg phase had serious events. 
Results do not apply to patients who have had CRPS for less than 1 year or more than 5 years and do not extend to full-dose treatment (for example, 2 g/kg). The study was inadequately powered to detect subgroup effects. Low-dose immunoglobulin treatment for 6 weeks was not effective in relieving pain in patients with moderate to severe CRPS of 1 to 5 years' duration. Medical Research Council/National Institute for Health Research Efficacy and Mechanism Evaluation Program, Pain Relief Foundation, and Biotest United Kingdom.
Resistance training for hot flushes in postmenopausal women: Randomized controlled trial protocol.
Berin, Emilia; Hammar, Mats L; Lindblom, Hanna; Lindh-Åstrand, Lotta; Spetz Holm, Anna-Clara E
2016-03-01
Hot flushes and night sweats affect 75% of all women after menopause and are a common reason for decreased quality of life in mid-aged women. Hormone therapy is effective in ameliorating symptoms but cannot be used by all women due to contraindications and side effects. Engagement in regular exercise is associated with fewer hot flushes in observational studies, but aerobic exercise has not proven effective in randomized controlled trials. It remains to be determined whether resistance training is effective in reducing hot flushes and improves quality of life in symptomatic postmenopausal women. The aim of this study is to investigate the effect of standardized resistance training on hot flushes and other health parameters in postmenopausal women. This is an open, parallel-group, randomized controlled intervention study conducted in Linköping, Sweden. Sixty symptomatic and sedentary postmenopausal women with a mean of at least four moderate to severe hot flushes per day or 28 per week will be randomized to an exercise intervention or unchanged physical activity (control group). The intervention consists of 15 weeks of standardized resistance training performed three times a week under supervision of a physiotherapist. The primary outcome is hot flush frequency assessed by self-reported hot flush diaries, and the difference in change from baseline to week 15 will be compared between the intervention group and the control group. The intention is that this trial will contribute to the evidence base regarding effective treatment for hot flushes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Arslan, Zakir; Çalık, Eyup Serhat; Kaplan, Bekir; Ahiskalioglu, Elif Oral
2016-01-01
There are many studies conducted on reducing the frequency and severity of fentanyl-induced cough during anesthesia induction. We propose that pheniramine maleate, an antihistaminic, may suppress this cough. We aim to observe the effect of pheniramine on fentanyl-induced cough during anesthesia induction. This is a double-blinded, prospective, three-arm parallel, randomized clinical trial of 120 patients with ASA (American Society of Anesthesiologists) physical status III and IV, aged ≥18 years and scheduled for elective open heart surgery under general anesthesia. Patients were randomly assigned to three groups of 40 patients, using computer-generated random numbers: placebo group, pheniramine group, and lidocaine group. Cough incidence differed significantly between groups. In the placebo group, 37.5% of patients had cough, whereas the frequency was significantly decreased in the pheniramine group (5%) and the lidocaine group (15%) (Fisher exact test, p=0.0007 and p=0.0188, respectively). There was no significant difference in cough incidence between the pheniramine group (5%) and the lidocaine group (15%) (Fisher exact test, p=0.4325). Cough severity also differed between groups. Post hoc tests with Bonferroni correction showed that mean cough severity in the placebo group differed significantly from that of the pheniramine group and the lidocaine group (p<0.0001 and p=0.009, respectively). There was no significant difference in cough severity between the pheniramine group and the lidocaine group (p=0.856). Intravenous pheniramine is as effective as lidocaine in preventing fentanyl-induced cough. Our results emphasize that pheniramine is a convenient drug to decrease this cough. Copyright © 2015 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.
Accuracy of the Parallel Analysis Procedure with Polychoric Correlations
ERIC Educational Resources Information Center
Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah
2009-01-01
The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…
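Horn's parallel analysis retains a component only if its eigenvalue exceeds what random, uncorrelated data of the same dimensions would produce. The following is a minimal, hypothetical sketch of that logic for the two-variable case, where the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 ± r. Real PA eigendecomposes matrices of many variables and, as the study discusses, may use polychoric rather than Pearson correlations; all function names here are illustrative.

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def largest_eigenvalue(r):
    """For the 2x2 correlation matrix [[1, r], [r, 1]] the
    eigenvalues are 1 + r and 1 - r; the largest is 1 + |r|."""
    return 1 + abs(r)

def parallel_analysis(x, y, n_sims=200, percentile=0.95, seed=1):
    """Retain the first component only if the observed largest
    eigenvalue exceeds the chosen percentile of the eigenvalue
    distribution from random (uncorrelated) data of the same size."""
    rng = random.Random(seed)
    n = len(x)
    observed = largest_eigenvalue(pearson_r(x, y))
    sims = []
    for _ in range(n_sims):
        rx = [rng.gauss(0, 1) for _ in range(n)]
        ry = [rng.gauss(0, 1) for _ in range(n)]
        sims.append(largest_eigenvalue(pearson_r(rx, ry)))
    sims.sort()
    threshold = sims[int(percentile * n_sims) - 1]
    return observed, threshold, observed > threshold

# Strongly correlated pair: the first component should be retained.
xs = list(range(50))
ys = [2 * v + 0.1 for v in xs]
obs, thr, keep = parallel_analysis(xs, ys)
```

With perfectly collinear data the observed eigenvalue is 2, well above anything random data of the same size generates, so the component is retained.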
"They who dream by day": parallels between Openness to Experience and dreaming.
DeYoung, Colin G; Grazioplene, Rachael G
2013-12-01
Individuals high in the personality trait Openness to Experience appear to engage spontaneously (during wake) in processes of elaborative encoding similar to those Llewellyn identifies in both dreaming and the ancient art of memory (AAOM). Links between Openness and dreaming support the hypothesis that dreaming is part of a larger process of cognitive exploration that facilitates adaptation to new experiences.
Meta-analysis of laparoscopic versus open repair of perforated peptic ulcer.
Antoniou, Stavros A; Antoniou, George A; Koch, Oliver O; Pointner, Rudolph; Granderath, Frank A
2013-01-01
Laparoscopic treatment of perforated peptic ulcer (PPU) has been introduced as an alternative to open surgery. It has been postulated that the minimally invasive approach involves less operative stress and results in decreased morbidity and mortality. We conducted a meta-analysis of randomized trials to test this hypothesis. Medline, EMBASE, and the Cochrane Central Register of Randomized Trials databases were searched, with no date or language restrictions. Our literature search identified 4 randomized trials, with a cumulative number of 289 patients, that compared the laparoscopic approach with open sutured repair of perforated ulcer. Analysis of outcomes did not favor either approach in terms of morbidity, mortality, and reoperation rate, although odds ratios seemed to consistently support the laparoscopic approach. The results did not determine the comparative efficiency and safety of the laparoscopic and open approaches for PPU. In view of increased interest in the laparoscopic approach, further randomized trials are considered essential to determine the relative effectiveness of laparoscopic and open repair of PPU.
NASA Astrophysics Data System (ADS)
Dong, Dai; Li, Xiaoning
2015-03-01
A high-pressure solenoid valve with high flow rate and high speed is a key component in an underwater driving system. However, a traditional single-spool pilot-operated valve cannot meet the demands of both high flow rate and high speed simultaneously, so a new structure is needed. A novel parallel-spool pilot-operated high-pressure solenoid valve is proposed to overcome the drawbacks of the current single-spool design. Mathematical models of the opening process and flow rate of the valve are established. The opening response time of the valve is subdivided into four parts to analyze the properties of the opening response, and corresponding formulas for each part are derived. Key factors that influence the opening response time are analyzed. According to the mathematical model of the valve, a simulation of the opening process is carried out in MATLAB. Parameters are chosen based on theoretical analysis to design the test prototype of the new valve. The opening response time of the designed valve is tested by measuring the current in the coil and the displacement of the main valve spool. The experimental results agree with the simulated results, verifying the theoretical analysis. The experimental opening response time of the valve is 48.3 ms at a working pressure of 10 MPa. The flow capacity test shows that the largest effective area is 126 mm² and the largest air flow rate is 2320 L/s. According to the results of the load driving test, the valve can meet the demands of the driving system. The proposed valve with parallel spools provides a new method for the design of a high-pressure valve with fast response and large flow rate.
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make the solution of large-scale PSM problems practical.
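The "inherently parallel" character of PSM Monte Carlo methods comes from sample independence: the sample budget can be split across processors and the failure counts merged afterward. A toy sketch of that decomposition follows; the limit state, distributions, and worker counts are illustrative assumptions (threads stand in for compute nodes), not the paper's method.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(seed, n):
    """Independent Monte Carlo chunk: count limit-state failures
    (load S exceeding resistance R) over n random samples. The
    normal distributions here are purely illustrative."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        resistance = rng.gauss(10.0, 1.5)  # hypothetical capacity
        load = rng.gauss(6.0, 2.0)         # hypothetical demand
        if load > resistance:
            failures += 1
    return failures

def failure_probability(n_total=40_000, n_workers=4):
    """Split the sample budget across workers; each worker uses its
    own seeded generator, so chunks are independent and the whole
    run is reproducible. Merging is a simple sum of counts."""
    per = n_total // n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        counts = pool.map(simulate_chunk, range(n_workers), [per] * n_workers)
    return sum(counts) / n_total

pf = failure_probability()
```

Because the chunks never communicate, the same decomposition maps directly onto processes, MPI ranks, or the parallel machines the paper targets.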
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.
2016-01-01
The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration consisting of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V with a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam, it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth, it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result illustrates the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples, provided that they are illuminated by polychromatic light.
Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha
2006-11-01
Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
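The mutual information index used above to quantify global distortion changes can be estimated from the joint intensity histogram of two images: the better two images align, the more one intensity predicts the other. A minimal sketch on toy 1-D "images" (lists of binned intensities) follows; the real study computes MI over registered 2-D echo-planar data, so this is illustrative only.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information (in bits) of two equal-length intensity
    sequences, estimated from their joint histogram. Higher MI means
    the intensities co-vary more strongly, the sense in which an MI
    index reflects better alignment / reduced geometric distortion."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))   # joint histogram counts
    pa = Counter(img_a)                  # marginal histograms
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# A 1-bit "image" compared with itself is perfectly informative,
# while a constant image shares no information with it.
ref = [0, 1, 0, 1, 1, 0, 1, 0]
assert mutual_information(ref, ref) == 1.0
```

Note that MI is insensitive to intensity inversion: an image and its photographic negative share maximal MI, which is why MI is favored for comparing images with differing contrast.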
ERIC Educational Resources Information Center
Montague, Margariete A.
This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…
Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes
2013-01-01
Motivation: Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. Results: We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure.
PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. Availability: The Java code is freely available at http://www2.imperial.ac.uk/~gmontana. PMID:24564704
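The MapReduce pattern PaRFR builds on can be miniaturized as: *map* trains one randomized tree per bootstrap sample, *reduce* aggregates their predictions. A hypothetical pure-Python sketch using one-split regression stumps in place of full trees follows; this is not PaRFR's Java/Hadoop code, and all names and the toy data are illustrative.

```python
import random
from functools import reduce

def train_stump(seed, xs, ys):
    """Map step: fit a one-split regression stump to a bootstrap
    resample, choosing the threshold minimizing squared error."""
    rng = random.Random(seed)
    idx = [rng.randrange(len(xs)) for _ in xs]        # bootstrap sample
    bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
    best = None
    for t in sorted(set(bx)):
        left = [y for x, y in zip(bx, by) if x <= t]
        right = [y for x, y in zip(bx, by) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def forest_predict(stumps, x):
    """Reduce step: average the per-stump predictions."""
    return reduce(lambda acc, s: acc + s(x), stumps, 0.0) / len(stumps)

# Toy data: a noise-free step function.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
stumps = [train_stump(seed, xs, ys) for seed in range(25)]
```

Because each map task depends only on its own seed and the shared data, the stump training distributes trivially across Hadoop workers, which is the property the paper exploits.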
Wang, Yue; Goh, Wilson; Wong, Limsoon; Montana, Giovanni
2013-01-01
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes: GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent-sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of other available PRNGs, such as those of MATLAB and FORTRAN and the Miller-Park algorithm, using standard statistical tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
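Of the three generators the paper combines, Xorshift is the simplest to sketch. Below is Marsaglia's 64-bit xorshift step with the textbook shift triple (13, 7, 17), plus a per-stream seeding loop to mimic the independent-sequence method (each parallel worker advances its own private state); the shift parameters and names are standard-reference assumptions, not necessarily the paper's exact choices.

```python
MASK64 = (1 << 64) - 1  # emulate 64-bit unsigned overflow in Python

def xorshift64(state):
    """One step of Marsaglia's 64-bit xorshift (shifts 13, 7, 17).
    Returns the new state, which doubles as the random output.
    The seed must be nonzero; zero is a fixed point."""
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

def stream(seed, n):
    """Independent-sequence parallelism in miniature: each worker
    gets its own seed and advances a private state, so streams never
    interact -- the property the GLOBAL-MODE and SHARED-MODE schemes
    both rely on."""
    out, s = [], seed
    for _ in range(n):
        s = xorshift64(s)
        out.append(s / 2**64)  # scale to a uniform in [0, 1)
    return out

a = stream(0x9E3779B97F4A7C15, 5)
b = stream(0xDEADBEEF, 5)
```

Since the xorshift step is a bijection on nonzero 64-bit states, distinct seeds are guaranteed to produce distinct sequences, though in practice well-separated seeds (or a jump function) are preferred to avoid overlapping subsequences.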
APPARATUS FOR PRODUCING IONS OF VAPORIZABLE MATERIALS
Starr, C.
1957-11-19
This patent relates to electronic discharge devices used as ion sources, and in particular describes an ion source for application in a calutron. The source utilizes two cathodes disposed at opposite ends of a longitudinal opening in an arc block fed with vaporized material. A magnetic field is provided parallel to the length of the arc block opening. The electrons from the cathodes are directed through slits in collimating electrodes into the arc block parallel to the magnetic field and cause an arc discharge to occur between the cathodes, as the arc block and collimating electrodes are at a positive potential with respect to the cathode. The ions are withdrawn by suitable electrodes disposed opposite the arc block opening. When such an ion source is used in a calutron, an arc discharge of increased length may be utilized, thereby increasing the efficiency and economy of operation.
Analysis of emotionality and locomotion in radio-frequency electromagnetic radiation exposed rats.
Narayanan, Sareesh Naduvil; Kumar, Raju Suresh; Paval, Jaijesh; Kedage, Vivekananda; Bhat, M Shankaranarayana; Nayak, Satheesha; Bhat, P Gopalakrishna
2013-07-01
In the current study, the modulatory role of mobile phone radio-frequency electromagnetic radiation (RF-EMR) on emotionality and locomotion was evaluated in adolescent rats. Male albino Wistar rats (6-8 weeks old) were randomly assigned to one of three groups of 12 animals each. Group I (control) remained in the home cage throughout the experimental period. Group II (sham exposed) was exposed to a mobile phone in switch-off mode for 28 days, and Group III (RF-EMR exposed) was exposed to RF-EMR (900 MHz) from an active GSM (Global System for Mobile communications) mobile phone with a peak power density of 146.60 μW/cm(2) for 28 days. On the 29th day, the animals were tested for emotionality and locomotion. The elevated plus maze (EPM) test revealed that the percentage of entries into the open arm, the percentage of time spent on the open arm, and the distance travelled on the open arm were significantly reduced in the RF-EMR exposed rats. Rearing frequency and grooming frequency were also decreased in the RF-EMR exposed rats. The defecation boli count during the EPM test was higher in the RF-EMR group. No statistically significant difference was found in total distance travelled, total arm entries, percentage of closed arm entries, or parallelism index in the RF-EMR exposed rats compared with controls. The results indicate that mobile phone radiation could affect the emotionality of rats without affecting general locomotion.
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
Increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage its architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro-language extension to Fortran 77 that is part of the Chombo framework. This domain-specific language forms a ready-made target language for automatically migrating the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, a performance gain of up to a factor of 11 was achieved with 4 threads with respect to the serial reference implementation.
Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric
2010-08-01
We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html
Flindall, Ian; Leff, Daniel Richard; Goodship, Jonathan; Sugden, Colin; Darzi, Ara
2016-04-01
To evaluate the impact of modafinil on "free" and "cued" recall of clinical information in fatigued but non-sleep-deprived clinicians. Despite attempts to minimize sleep deprivation through redesign of the roster of residents and staff surgeons, evidence suggests that fatigue remains prevalent. The wake-promoting agent modafinil improves cognition in the sleep-deprived fatigued state and may improve information recall in fatigued non-sleep-deprived clinicians. Twenty-four medical undergraduates participated in a double-blind, parallel, randomized controlled trial (modafinil 200 mg : placebo). Medication was allocated 2 hours before a 90-minute fatigue-inducing, continuous performance task (dual 2-back task). A case history memorization task was then performed. Clinical information recall was assessed as "free" (no cognitive aids) and "cued" (using aide-memoires). Open and closed cues represent information of increasing specificity to aid the recall of clinical information. Fatigue was measured objectively using the psychomotor vigilance task at induction, before and after the dual 2-back task. Modafinil decreased false starts and lapses (modafinil = 0.50, placebo = 9.83, P < .05) and improved psychomotor vigilance task performance (decreased performance, modafinil = 0.006, placebo = 0.098, P < .05). Modafinil improved free information recall (modafinil = 137.8, placebo = 106.0, P < .01). There was no significant difference between groups in the amount of information recalled with open (modafinil = 62.3, placebo = 52.8, P = .1) or closed cues (modafinil = 80.1, placebo = 75.9, P = .3). Modafinil attenuated fatigue and improved free recall of clinical information without improving cue-based recall under our experimental conditions. Memory cues to aid retrieval of clinical information are convenient interventions that could decrease fatigue-related error without the adverse effects of neuropharmacology. Copyright © 2016 Elsevier Inc. All rights reserved.
Moreno, M. Llanos; Neto, Arlete; Ariceta, Gema; Vara, Julia; Alonso, Angel; Bueno, Alberto; Afonso, Alberto Caldas; Correia, António Jorge; Muley, Rafael; Barrios, Vicente; Gómez, Carlos; Argente, Jesús
2010-01-01
Background and objectives: Our aim was to evaluate the growth-promoting effect of growth hormone (GH) treatment in infants with chronic renal failure (CRF) and persistent growth retardation despite adequate nutritional and metabolic management. Design, setting, participants, & measurements: The study design included randomized, parallel groups in an open, multicenter trial comparing GH (0.33 mg/kg per wk) with nontreatment with GH during 12 months. Sixteen infants who had growth retardation, were aged 12 ± 3 months, had CRF (GFR ≤60 ml/min per 1.73 m2), and had adequate nutritional intake and good metabolic control were recruited from eight pediatric nephrology departments from Spain and Portugal. Main outcome measures were body length, body weight, bone age, biochemical and hormonal analyses, renal function, bone mass, and adverse effects. Results: Length gain in infants who were treated with GH was statistically greater (P < 0.05) than that of nontreated children (14.5 versus 9.5 cm/yr; SD score 1.43 versus −0.11). The GH-induced stimulation of growth was associated with no undesirable effects on bone maturation, renal failure progression, or metabolic control. In addition, GH treatment improved forearm bone mass and increased serum concentrations of total and free IGF-I and IGF-binding protein 3 (IGFBP-3), whereas IGF-II, IGFBP-1, IGFBP-2, GH-binding protein, ghrelin, and leptin were not modified. Conclusions: Infants with CRF and growth retardation despite good metabolic and nutritional control benefit from GH treatment without adverse effects during 12 months of therapy. PMID:20522533
Memantine and constraint-induced aphasia therapy in chronic poststroke aphasia.
Berthier, Marcelo L; Green, Cristina; Lara, J Pablo; Higueras, Carolina; Barbancho, Miguel A; Dávila, Guadalupe; Pulvermüller, Friedemann
2009-05-01
We conducted a randomized, double-blind, placebo-controlled, parallel-group study of both memantine and constraint-induced aphasia therapy (CIAT) on chronic poststroke aphasia followed by an open-label extension phase. Patients were randomized to memantine (20 mg/day) or placebo alone during 16 weeks, followed by combined drug treatment with CIAT (weeks 16-18), drug treatment alone (weeks 18-20), and washout (weeks 20-24), and finally, an open-label extension phase of memantine (weeks 24-48). After baseline evaluations, clinical assessments were done at two end points (weeks 16 and 18), and at weeks 20, 24, and 48. Outcome measures were changes in the Western Aphasia Battery-Aphasia Quotient and the Communicative Activity Log. Twenty-eight patients were included, and 27 completed both treatment phases. The memantine group showed significantly better improvement on Western Aphasia Battery-Aphasia Quotient compared with the placebo group while the drug was taken (week 16, p = 0.002; week 18, p = 0.0001; week 20, p = 0.005) and at the washout assessment (p = 0.041). A significant increase in Communicative Activity Log was found in favor of memantine-CIAT relative to placebo-CIAT (week 18, p = 0.040). CIAT treatment led to significant improvement in both groups (p = 0.001), which was even greater under additional memantine treatment (p = 0.038). Beneficial effects of memantine were maintained in the long-term follow-up evaluation, and patients who switched to memantine from placebo experienced a benefit (p = 0.02). Both memantine and CIAT alone improved aphasia severity, but best outcomes were achieved combining memantine with CIAT. Beneficial effects of memantine and CIAT persisted on long-term follow-up.
Chopra, Arvind; Chandrashekara, S; Iyer, Rajgopalan; Rajasekhar, Liza; Shetty, Naresh; Veeravalli, Sarathchandra Mouli; Ghosh, Alakendu; Merchant, Mrugank; Oak, Jyotsna; Londhey, Vikram; Barve, Abhijit; Ramakrishnan, M S; Montero, Enrique
2016-04-01
The objective of this study was to assess the safety and efficacy of itolizumab with methotrexate in active rheumatoid arthritis (RA) patients who had inadequate response to methotrexate. In this open-label, phase 2 study, 70 patients fulfilling American College of Rheumatology (ACR) criteria and negative for latent tuberculosis were randomized to four arms: 0.2, 0.4, or 0.8 mg/kg itolizumab weekly combined with oral methotrexate, and methotrexate alone (2:2:2:1). Patients were treated for 12 weeks, followed by 12 weeks of methotrexate alone during follow-up. Twelve weeks of itolizumab therapy was well tolerated. Forty-four patients reported adverse events (AEs); except for six severe AEs, all others were mild or moderate. Infusion-related reactions mainly occurred after the first infusion, and none were reported after the 11th infusion. No serum anti-itolizumab antibodies were detected. In the full analysis set, all itolizumab doses showed evidence of efficacy. At 12 weeks, 50 % of the patients achieved ACR20, and 58.3 % moderate or good 28-joint count Disease Activity Score (DAS-28) response; at week 24, these responses were seen in 22 and 31 patients. Significant improvements were seen in Short Form-36 Health Survey and Health Assessment Questionnaire Disability Index scores. Overall, itolizumab in combination with methotrexate was well tolerated and efficacious in RA for 12 weeks, with efficacy persisting for the entire 24-week evaluation period. (Clinical Trial Registry of India, http://ctri.nic.in/Clinicaltrials/login.php , CTRI/2008/091/000295).
Use Computer-Aided Tools to Parallelize Large CFD Applications
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Yan, J.
2000-01-01
Porting applications to high performance parallel computers is always a challenging task. It is time consuming and costly. With rapid progress in hardware architectures and the increasing complexity of real applications in recent years, the problem has become even more severe. Today, scalability and high performance mostly involve handwritten parallel programs using message-passing libraries (e.g., MPI). However, this process is very difficult and often error-prone. The recent reemergence of shared memory parallel (SMP) architectures, such as the cache coherent Non-Uniform Memory Access (ccNUMA) architecture used in the SGI Origin 2000, shows good prospects for scaling beyond hundreds of processors. Programming on an SMP is simplified by working in a globally accessible address space. The user can supply compiler directives, such as OpenMP, to parallelize the code. As an industry standard for portable implementation of parallel programs for SMPs, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran, C, and C++ to express shared memory parallelism. It promises an incremental path for parallel conversion of existing software, as well as scalability and performance for a complete rewrite or an entirely new development. Perhaps the main disadvantage of programming with directives is that inserted directives may not necessarily enhance performance; in the worst cases, they can create erroneous results. While vendors have provided tools to perform error-checking and profiling, automation in directive insertion is very limited and often fails on large programs, primarily due to the lack of a thorough enough data dependence analysis. To overcome this deficiency, we have developed a toolkit, CAPO, to automatically insert OpenMP directives in Fortran programs and apply certain degrees of optimization.
CAPO is aimed at taking advantage of the detailed interprocedural dependence analysis provided by CAPTools, developed by the University of Greenwich, to reduce potential errors made by users. Earlier tests on the NAS Benchmarks and ARC3D demonstrated good success with this tool. In this study, we have applied CAPO to parallelize three large applications in the area of computational fluid dynamics (CFD): OVERFLOW, TLNS3D and INS3D. These codes are widely used for solving the Navier-Stokes equations with complicated boundary conditions and turbulence models in multiple zones. Each comprises 50K to 100K lines of Fortran 77. As an example, CAPO took 77 hours to complete the data dependence analysis of OVERFLOW on a workstation (SGI, 175 MHz R10K processor). A fair amount of effort was spent on correcting false dependences arising from a lack of necessary knowledge during the analysis. Even so, CAPO provides an easy way for the user to interact with the parallelization process. The OpenMP version was generated within a day after the analysis was completed. Due to the sequential algorithms involved, code sections in TLNS3D and INS3D needed to be restructured by hand to produce more efficient parallel codes. An included figure shows preliminary test results of the generated OVERFLOW with several test cases in a single zone. The MPI data points for the small test case were taken from a hand-coded MPI version. As we can see, CAPO's version achieved an 18-fold speedup on 32 nodes of the SGI O2K, and for the small test case it outperformed the MPI version. These results are very encouraging, but further work is needed. For example, although CAPO attempts to place directives on the outermost parallel loops in an interprocedural framework, it does not insert directives based on the best manual strategy. In particular, it lacks support for parallelization at the multi-zone level.
Future work will emphasize the development of methodology to work at the multi-zone level and with a hybrid approach. Tools to perform more complicated code transformations are also needed.
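The directive-based worksharing that CAPO automates can be illustrated with a small analogy in Python (not CAPO output; `body` and the thread pool are hypothetical stand-ins for loop iterations divided among OpenMP threads):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # stand-in for one independent loop iteration
    return i * i

def serial_do(n):
    return sum(body(i) for i in range(n))

def parallel_do(n, workers=4):
    # what a worksharing directive does: the iteration space is divided
    # among a team of workers and the partial results are combined
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(body, range(n)))
```

The safety issue the abstract describes corresponds to `body` having a hidden dependence between iterations; the directive (here, the pool) would then silently compute wrong answers, which is why the dependence analysis matters.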
Bridges, Eileen; Altherwi, Tawfeeq; Correa, José A; Hew-Butler, Tamara
2018-01-23
To determine whether oral administration of 3% hypertonic saline (HTS) is as efficacious as intravenous (IV) 3% saline in reversing symptoms of mild-to-moderate symptomatic exercise-associated hyponatremia (EAH) in athletes during and after a long-distance triathlon. Noninferiority, open-label, parallel-group, randomized controlled trial of IV versus oral HTS. We used permuted block randomization with sealed envelopes, each containing either the word "oral" or "IV." Annual long-distance triathlon (3.8-km swim, 180-km bike, and 42-km run) at Mont-Tremblant, Quebec, Canada. Twenty race finishers with mild to moderately symptomatic EAH. Age, sex, race finish time, and 9 clinical symptoms. Time from treatment to discharge. We successfully randomized 20 participants to receive either an oral (n = 11) or IV (n = 9) bolus of HTS. We performed venipuncture to measure serum sodium (Na) at presentation to the medical clinic and at the time of symptom resolution after the intervention. The average time from treatment to discharge was 75.8 minutes (SD 29.7) for the IV treatment group and 50.3 minutes (SD 26.8) for the oral treatment group (t test, P = 0.02). Serum Na before and after treatment was not significantly different in either group. There was no difference at presentation between groups in age, sex, or race finish time; both groups presented with an average of 6 symptoms. Oral HTS is effective in reversing symptoms of mild-to-moderate hyponatremia in EAH.
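The permuted block randomization behind the sealed envelopes can be sketched as follows (block size 4 is an illustrative assumption; the study does not report its block size):

```python
import random

def permuted_block_randomization(n_participants, block_size=4, seed=0):
    # Each block holds equal numbers of "oral" and "IV" assignments,
    # shuffled within the block, so the two arms stay balanced over time.
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ["oral", "IV"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]
```

Because every complete block is balanced, the final group sizes can differ by at most half a block, which is why the observed 11/9 split is consistent with this scheme.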
Eskilsson, Therese; Slunga Järvholm, Lisbeth; Malmberg Gavelin, Hanna; Stigsdotter Neely, Anna; Boraxbekk, Carl-Johan
2017-09-02
Patients with stress-related exhaustion suffer from cognitive impairments, which often remain after psychological treatment or workplace interventions. It is important to find effective treatments that can address this problem. Therefore, the aim of this study was to investigate the effects on cognitive performance and psychological variables of a 12-week aerobic training program performed at a moderate-vigorous intensity for patients with exhaustion disorder who participated in a multimodal rehabilitation program. In this open-label, parallel, randomized and controlled trial, 88 patients diagnosed with exhaustion disorder participated in a 24-week multimodal rehabilitation program. After 12 weeks in the program the patients were randomized to either a 12-week aerobic training intervention or to a control group with no additional training. The primary outcome measure was cognitive function, and secondary outcome measures were psychological health variables and aerobic capacity. In total, 51% of patients in the aerobic training group and 78% of patients in the control group completed the intervention period. The aerobic training group significantly improved in maximal oxygen uptake and episodic memory performance. No additional improvement in burnout, depression or anxiety was observed in the aerobic group compared with controls. Aerobic training at a moderate-vigorous intensity within a multimodal rehabilitation program for patients with exhaustion disorder facilitated episodic memory. A future challenge would be the clinical implementation of aerobic training and methods to increase feasibility in this patient group. ClinicalTrials.gov: NCT03073772. Retrospectively registered 21 February 2017.
Onoue, Takeshi; Goto, Motomitsu; Kobayashi, Tomoko; Tominaga, Takashi; Ando, Masahiko; Honda, Hiroyuki; Yoshida, Yasuko; Tosaki, Takahiro; Yokoi, Hisashi; Kato, Sawako; Maruyama, Shoichi; Arima, Hiroshi
2017-08-01
The Internet of Things (IoT) allows collecting vast amounts of health-relevant data such as daily activity, body weight (BW), and blood pressure (BP) automatically. The use of IoT devices to monitor diabetic patients has been studied, but previous studies could not evaluate IoT-dependent effects because health data were not measured in the control groups. This multicenter, open-label, randomized, parallel-group study will compare the impact of intensive health guidance using IoT and conventional medical guidance on glucose control. It will be conducted in outpatients with type 2 diabetes for a period of 6 months. IoT devices to measure the amount of daily activity, BW, and BP will be provided to IoT group patients. Healthcare professionals (HCPs) will provide appropriate feedback according to the data. Non-IoT control patients will be given measurement devices that do not have a feedback function. The primary outcome is glycated hemoglobin at 6 months. The study has already enrolled 101 patients, 50 in the IoT group and 51 in the non-IoT group, at the two participating outpatient clinics. The baseline characteristics of the two groups did not differ, except for triglycerides. This will be the first randomized, controlled study to evaluate IoT-dependent effects of intensive feedback from HCPs. The results will validate a new method of health-data collection and provision of feedback suitable for diabetes support with increased effectiveness and low cost.
A parallel program for numerical simulation of discrete fracture network and groundwater flow
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Liou, Tai-Sheng; Kalatehjari, Roohollah
2017-04-01
The ability to model fluid flow in a Discrete Fracture Network (DFN) is critical to various applications such as exploration of reserves in geothermal and petroleum reservoirs, geological sequestration of carbon dioxide and final disposal of spent nuclear fuels. Although several commercial or academic DFN flow simulators are already available (e.g., FracMan and DFNWORKS), challenges in terms of computational efficiency and three-dimensional visualization remain, which motivates this study to develop a new DFN and flow simulator. The new simulator, DFNbox, was written in C++ under the cross-platform software development framework provided by Qt. DFNbox integrates the following capabilities into a user-friendly drop-down menu interface: DFN simulation and clipping, 3D mesh generation, fracture data analysis, connectivity analysis, flow path analysis and steady-state groundwater flow simulation. All three-dimensional visualization graphics were developed using the free OpenGL API. As in other DFN simulators, fractures are conceptualized as a random point process in space, with stochastic characteristics represented by orientation, size, transmissivity and aperture. Fracture meshing was implemented by Delaunay triangulation for visualization, but not for flow simulation purposes. The boundary element method was used for flow simulations, such that only the unknown head or flux along exterior and intersection boundaries is needed to solve the flow field in the DFN. Parallel computation was taken into account in developing DFNbox wherever such an approach was possible. For example, the time-consuming sequential code for fracture clipping calculations has been completely replaced by a highly efficient parallel one. This greatly enhances computational efficiency, especially on multi-thread platforms. Furthermore, DFNbox has been successfully tested on Windows and Linux systems with equally good performance.
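The conceptualization of fractures as a random point process with stochastic orientation, size, transmissivity and aperture can be sketched as follows (the distributions and parameters are illustrative assumptions, not those used in DFNbox):

```python
import math
import random

def generate_dfn(n_fractures, domain=(10.0, 10.0, 10.0), seed=1):
    # Each fracture: a random center in the box, a uniformly random
    # orientation on the hemisphere, a heavy-tailed radius, and a
    # log-uniform transmissivity. All distributions here are illustrative,
    # not calibrated to field data.
    rng = random.Random(seed)
    fractures = []
    for _ in range(n_fractures):
        fractures.append({
            "center": tuple(rng.uniform(0.0, d) for d in domain),
            "dip_dir": rng.uniform(0.0, 360.0),
            "dip": math.degrees(math.acos(rng.uniform(0.0, 1.0))),
            "radius": 0.5 * rng.paretovariate(2.5),
            "transmissivity": 10.0 ** rng.uniform(-9.0, -5.0),
        })
    return fractures
```

A simulator would then clip these disks against the domain and each other, and mesh or solve only the connected clusters.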
Parallel, Distributed Scripting with Python
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P J
2002-05-24
Parallel computers used to be, for the most part, one-of-a-kind systems that were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message-passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high-level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000-word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
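The password-checker task described here can be sketched with the standard library alone (SHA-256 and the tiny word list are toy stand-ins, not the Unix crypt routine or a real dictionary; a cluster version would distribute the shards across MPI ranks rather than threads):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(words, target_hash):
    # each worker scans its shard of the dictionary
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == target_hash:
            return w
    return None

def crack(dictionary, target_hash, workers=4):
    # distribute the information: strided shards, one per worker
    shards = [dictionary[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(check_chunk, shards, [target_hash] * workers):
            if hit is not None:
                return hit
    return None
```

The "co-ordinate the work" part is the reduction at the end: each shard reports a hit or `None`, and the first hit wins.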
Matsuoka, Yutaka; Nishi, Daisuke; Yonemoto, Naohiro; Hamazaki, Kei; Matsumura, Kenta; Noguchi, Hiroko; Hashimoto, Kenji; Hamazaki, Tomohito
2013-01-05
Preclinical and clinical studies suggest that supplementation with omega-3 fatty acids after trauma might reduce subsequent posttraumatic stress disorder (PTSD). To date, we have shown in an open trial that PTSD symptoms in critically injured patients can be reduced by taking omega-3 fatty acids, hypothesized to stimulate hippocampal neurogenesis. The primary aim of the present randomized controlled trial is to examine the efficacy of omega-3 fatty acid supplementation in the secondary prevention of PTSD following accidental injury, as compared with placebo. This paper describes the rationale and protocol of this trial. The Tachikawa Project for Prevention of Posttraumatic Stress Disorder with Polyunsaturated Fatty Acid (TPOP) is a double-blinded, parallel group, randomized controlled trial to assess whether omega-3 fatty acid supplementation can prevent PTSD symptoms among accident-injured patients consecutively admitted to an intensive care unit. We plan to recruit accident-injured patients and follow them prospectively for 12 weeks. Enrolled patients will be randomized to either the omega-3 fatty acid supplement group (1,470 mg docosahexaenoic acid and 147 mg eicosapentaenoic acid daily) or placebo group. Primary outcome is score on the Clinician-Administered PTSD Scale (CAPS). We will need to randomize 140 injured patients to have 90% power to detect a 10-point difference in mean CAPS scores with omega-3 fatty acid supplementation compared with placebo. Secondary measures are diagnosis of PTSD and major depressive disorder, depressive symptoms, physiologic response in the experiment using script-driven imagery and acoustic stimulation, serum brain-derived neurotrophic factor, health-related quality of life, resilience, and aggression. Analyses will be by intent to treat. The trial was initiated on December 13 2008, with 104 subjects randomized by November 30 2012. 
This study promises to be the first trial to provide a novel prevention strategy for PTSD among traumatized people. ClinicalTrials.gov Identifier NCT00671099.
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, the OpenMP parallel programming interface is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time and is easy to port to multi-core platforms.
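The "multiple atoms updated simultaneously" idea can be shown schematically (a toy averaging update stands in for the SVD-based K-SVD atom update; the point is only how independent atom updates map onto parallel workers):

```python
from concurrent.futures import ThreadPoolExecutor

def update_atom(atom_signals):
    # toy stand-in for the K-SVD atom update: average the signals
    # assigned to this atom and normalize the result
    n = len(atom_signals[0])
    mean = [sum(s[k] for s in atom_signals) / len(atom_signals) for k in range(n)]
    norm = sum(x * x for x in mean) ** 0.5 or 1.0
    return [x / norm for x in mean]

def update_dictionary_parallel(assignments, workers=4):
    # assignments: one entry per atom, listing the signals that use it;
    # the updates are independent, so they run concurrently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_atom, assignments))
```

In the real algorithm each per-atom update is a rank-one SVD of a residual matrix, which is exactly why batching atoms across cores pays off.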
Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J
2004-09-01
We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1 × 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors, based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 × 10^8 histories. For a smaller number of histories (1 × 10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 × 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel monoenergetic 20 MeV pencil electron beam. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central-axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
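The role SPRNG plays, giving each process its own reproducible random stream so histories do not repeat across ranks, can be sketched as follows (a simple seed-derivation scheme for illustration only; SPRNG's generators carry much stronger stream-independence guarantees):

```python
import hashlib
import random

def stream_for_rank(base_seed, rank):
    # derive a distinct, reproducible seed for each MPI rank
    digest = hashlib.sha256(f"{base_seed}:{rank}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

def simulate_histories(rng, n):
    # toy stand-in for Monte Carlo particle histories
    return [rng.random() for _ in range(n)]
```

Reproducibility per rank is what makes parallel Monte Carlo results debuggable: rerunning rank k with the same base seed replays exactly the same histories.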
The energy density distribution of an ideal gas and Bernoulli’s equations
NASA Astrophysics Data System (ADS)
Santos, Leonardo S. F.
2018-05-01
This work discusses the energy density distribution in an ideal gas and the consequences of Bernoulli’s equation and the corresponding relation for compressible fluids. The aim of this work is to study how Bernoulli’s equation determines the energy flow in a fluid, although Bernoulli’s equation does not describe the energy density itself. The model from molecular dynamics considerations that describes an ideal gas at rest with uniform density is modified to explore the gas in motion with non-uniform density and gravitational effects. The difference between the component of the speed of a particle that is parallel to the gas speed and the gas speed itself is called the ‘parallel random speed’. The pressure from the ‘parallel random speed’ is denominated the parallel pressure. The modified model predicts that the energy density is the sum of kinetic and potential gravitational energy densities plus two terms with static and parallel pressures. The application of Bernoulli’s equation and the corresponding relation for compressible fluids in the energy density expression has resulted in two new formulations. For an incompressible and a compressible gas, the energy density expressions are written as a function of stagnation, static and parallel pressures, without any dependence on kinetic or gravitational potential energy densities. These expressions of the energy density are the main contributions of this work. When the parallel pressure was uniform, the energy density distribution for the incompressible approximation and the compressible gas did not converge to zero in the limit of null static pressure. This result is rather unusual because the temperature tends to zero for null pressure. When the gas was considered incompressible and the parallel pressure was equal to the static pressure, the energy density maintained this unusual behaviour at small pressures.
If the parallel pressure was equal to the static pressure, the energy density converged to zero in the limit of null pressure only if the gas was compressible. Only this last situation describes intuitive behaviour for an ideal gas.
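The two relations invoked above, Bernoulli’s equation for incompressible flow and its compressible counterpart, can be written in their standard textbook forms (a sketch of the classical relations, not the paper’s modified energy density expressions):

```latex
% Steady, incompressible flow along a streamline: static pressure,
% dynamic pressure and the hydrostatic term sum to the stagnation constant.
p + \tfrac{1}{2}\rho v^{2} + \rho g z = p_{0}

% For a compressible (barotropic) gas the pressure term becomes an
% enthalpy integral along the streamline:
\int \frac{dp}{\rho} + \tfrac{1}{2} v^{2} + g z = \text{const}
```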
Event parallelism: Distributed memory parallel computing for high energy physics experiments
NASA Astrophysics Data System (ADS)
Nash, Thomas
1989-12-01
This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.
NASA Technical Reports Server (NTRS)
Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2017-01-01
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
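The two-level MPI+OpenMP (or MPI+SMPI) structure explored by the mini-apps can be sketched schematically (ranks are simulated serially here; a real code would launch them with mpiexec, and the final sum stands in for an MPI reduction):

```python
from concurrent.futures import ThreadPoolExecutor

def rank_partial_sum(chunk, threads=2):
    # shared-memory level: threads reduce slices of this rank's chunk
    slices = [chunk[i::threads] for i in range(threads)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(sum, slices))

def hybrid_sum(data, ranks=4, threads=2):
    # distributed-memory level: the domain is split across "ranks",
    # each rank reduces its own chunk, then the partials are combined
    chunks = [data[i::ranks] for i in range(ranks)]
    return sum(rank_partial_sum(c, threads) for c in chunks)
```

The performance trade-off reported above lives in how much work is moved into the inner shared-memory level versus left at the message-passing level.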
A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
2017-01-01
Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one route to real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts’ Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner, achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
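Of the filters named above, the Sobel operator is the simplest to show; a pure-CPU sketch of the stencil (no CUDA, for illustration only; each output pixel is independent, which is what makes the GPU mapping natural):

```python
def sobel(image):
    # gradient magnitude from the horizontal and vertical Sobel kernels;
    # border pixels are left at zero for simplicity
    h, w = len(image), len(image[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a GPU the two nested loops disappear: one thread per pixel evaluates the same 3×3 stencil.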
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
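The heart of JFNK is that the Jacobian-vector products needed by the Krylov solver are approximated by finite differences of the residual, so no Jacobian matrix is ever formed or stored. A minimal sketch of that matrix-free product (not MOOSE's implementation):

```python
def jacobian_vector_product(residual, u, v, eps=1e-7):
    # J(u)·v ≈ (F(u + eps·v) - F(u)) / eps : one extra residual
    # evaluation replaces an explicit Jacobian assembly
    f0 = residual(u)
    up = [ui + eps * vi for ui, vi in zip(u, v)]
    f1 = residual(up)
    return [(a - b) / eps for a, b in zip(f1, f0)]
```

A Krylov method such as GMRES only ever asks for products J·v, so this single routine is enough to couple arbitrarily many physics modules through their shared residual.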
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Flat connections in open string mirror symmetry
NASA Astrophysics Data System (ADS)
Alim, Murad; Hecht, Michael; Jockers, Hans; Mayr, Peter; Mertens, Adrian; Soroush, Masoud
2012-06-01
We study a flat connection defined on the open-closed deformation space of open string mirror symmetry for type II compactifications on Calabi-Yau threefolds with D-branes. We use flatness and integrability conditions to define distinguished flat coordinates and the superpotential function at an arbitrary point in the open-closed deformation space. Integrability conditions are given for concrete deformation spaces with several closed and open string deformations. We study explicit examples for expansions around different limit points, including orbifold Gromov-Witten invariants, and brane configurations with several brane moduli. In particular, the latter case covers stacks of parallel branes with non-Abelian symmetry.
1980-10-31
and is initiated at the periphery of the device at an opening in the Si3N4 layer. Rate measurements of this process were made on the GKOUSS imager using [...]. By suitable choice of dimensions, single-mode operation can be obtained. There is a stripe opening in the oxide film running parallel to the etched rib, which can be seen in cross section in Fig. I-1(a). This stripe opening is the nucleation region for the epitaxial growth; other oxide-confined waveguide [...]
Furumura, Minao; Sato, Noriko; Kusaba, Nobutaka; Takagaki, Kinya; Nakayama, Juichiro
2012-01-01
French maritime pine bark extract (PBE) has gained popularity as a dietary supplement in the treatment of various diseases due to its polyphenol-rich ingredients. Oligomeric proanthocyanidins (OPCs), a class of bioflavonoid complexes, are enriched in French maritime PBE and have antioxidant and anti-inflammatory activity. Previous studies have suggested that French maritime PBE helps reduce ultraviolet radiation damage to the skin and may protect human facial skin from symptoms of photoaging. To evaluate the clinical efficacy of French maritime PBE in the improvement of photodamaged facial skin, we conducted a randomized trial of oral supplementation with PBE. One hundred and twelve women with mild to moderate photoaging of the skin were randomized to either a 12-week open trial regimen of 100 mg PBE supplementation once daily or to a parallel-group trial regimen of 40 mg PBE supplementation once daily. A significant decrease in clinical grading of skin photoaging scores was observed over the time course of both the 100 mg daily and the 40 mg daily PBE supplementation regimens. A significant reduction in the pigmentation of age spots was also demonstrated utilizing skin color measurements. Clinically significant improvement in photodamaged skin could be achieved with PBE. Our findings confirm the efficacy and safety of PBE.
Haziza, Christelle; de La Bourdonnaye, Guillaume; Skiada, Dimitra; Ancerewicz, Jacek; Baker, Gizelle; Picavet, Patrick; Lüdicke, Frank
2016-11-30
The Tobacco Heating System (THS) 2.2, a candidate Modified Risk Tobacco Product (MRTP), is designed to heat tobacco without burning it. Tobacco is heated in order to reduce the formation of harmful and potentially harmful constituents (HPHC), and reduce the consequent exposure, compared with combustible cigarettes (CC). In this 5-day exposure, controlled, parallel-group, open-label clinical study, 160 smoking, healthy subjects were randomized to three groups and asked to: (1) switch from CCs to THS 2.2 (THS group; 80 participants); (2) continue to use their own non-menthol CC brand (CC group; 41 participants); or (3) to refrain from smoking (smoking abstinence (SA) group; 39 participants). Biomarkers of exposure, except those associated with nicotine exposure, were significantly reduced in the THS group compared with the CC group, and approached the levels observed in the SA group. Increased product consumption and total puff volume were reported in the THS group. However, exposure to nicotine was similar to CC at the end of the confinement period. Reduction in urge-to-smoke was comparable between the THS and CC groups and THS 2.2 product was well tolerated. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
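Item (3) above, load balancing and domain decomposition, commonly pairs a block partition with ghost-cell (halo) exchange; a one-dimensional sketch (plain lists stand in for the per-processor arrays of a parallel ocean or atmosphere model):

```python
def decompose(field, parts):
    # block partition; the last part absorbs the remainder, a simple
    # (and imperfect) load-balancing choice
    n = len(field)
    size = n // parts
    return [field[i * size:(i + 1) * size] if i < parts - 1
            else field[(parts - 1) * size:] for i in range(parts)]

def exchange_halos(subdomains):
    # each subdomain receives one ghost cell from each neighbor, the
    # data a stencil needs at its boundary; None marks a physical edge
    padded = []
    for i, sub in enumerate(subdomains):
        left = subdomains[i - 1][-1] if i > 0 else None
        right = subdomains[i + 1][0] if i < len(subdomains) - 1 else None
        padded.append((left, sub, right))
    return padded
```

In an MPI code the halo exchange becomes paired sends and receives with neighboring ranks; the pattern is otherwise identical.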
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology: a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion.
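The overlap test at the core of any sphere-packing code can be shown with a simpler rejection-sampling variant (illustrative only: the work described grows polydisperse spheres in place, whereas this sketch places fixed-radius spheres and ignores spheres protruding through the domain boundary):

```python
import math
import random

def pack_spheres(n_attempts, radius=0.05, seed=3):
    # Rejection sampling in the unit cube: keep a candidate center only
    # if it lies at least two radii from every accepted center.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_attempts):
        c = (rng.random(), rng.random(), rng.random())
        if all((c[0] - a[0]) ** 2 + (c[1] - a[1]) ** 2 + (c[2] - a[2]) ** 2
               >= (2.0 * radius) ** 2 for a in accepted):
            accepted.append(c)
    return accepted

def packing_fraction(spheres, radius=0.05):
    # total sphere volume over the unit-cube volume (boundary effects ignored)
    return len(spheres) * (4.0 / 3.0) * math.pi * radius ** 3
```

Rejection sampling saturates well below the dense-packing limit, which is one reason growth-based algorithms like the one described above are needed for realistic propellant loadings.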
In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
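The core packing idea can be illustrated with a much simplified serial sketch: random sequential insertion of equal-size spheres with overlap rejection. The actual model described above uses a growth-rate-based algorithm for polydisperse spheres parallelized with OpenMP; every name and parameter below is illustrative only.

```python
import math
import random

def pack_spheres(radius=0.05, box=1.0, attempts=5000, seed=1):
    """Random sequential addition: insert non-overlapping spheres of a
    single radius into a cubic box, rejecting overlapping candidates."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        c = [rng.uniform(radius, box - radius) for _ in range(3)]
        if all(math.dist(c, o) >= 2 * radius for o in centers):
            centers.append(c)
    volume = len(centers) * (4.0 / 3.0) * math.pi * radius ** 3
    return centers, volume / box ** 3

centers, phi = pack_spheres()
print(f"placed {len(centers)} spheres, packing fraction = {phi:.3f}")
```

Random sequential addition saturates well below the dense random packing limit, which is one reason the growth-rate-based algorithm of the thesis is needed for realistic packing fractions.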
NASA Astrophysics Data System (ADS)
Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo
2012-02-01
We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. Benchmark results are presented here. New version program summary. Program title: mm_par2.0. Catalogue identifier: ADXP_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2 390 858. No. of bytes in distributed program, including test data, etc.: 25 068 310. Distribution format: tar.gz. Programming language: C++. Computer: Any system operated by Linux or Unix. Operating system: Linux. Classification: 7.7. External routines: We provide wrappers for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] random number generator; the Mersenne Twister [5] random number generator; and a space-filling curve routine. Catalogue identifier of previous version: ADXP_v1_0. Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560. Does the new version supersede the previous version?: Yes. Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales. Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles; Langevin dynamics simulation; dissipative particle dynamics simulation. Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification, and is also better for maintenance. Second, version 1.0 was based on atom decomposition and domain decomposition schemes [6] for parallelization.
However, atom decomposition is not popular due to its poor scalability, whereas the domain decomposition scheme scales better. Domain decomposition still has a limitation in utilizing the large number of cores on recent petascale computers, due to the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OpenMP [8]. Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussion on programming and debugging. Running time: Running time depends on system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å×62.23 Å×62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, Kx, Ky, and Kz were set to 64 and the interpolation order was set to 4. The fast Fourier transforms used the Intel MKL library. All bonds including hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows performance gains from using a CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz and a GeForce GTX 580. Even though mm_par2.0 has not yet been ported to the GPU, these data are useful for estimating mm_par2.0 performance on a GPU. Timing results for 1000 MD steps. 1, 2, 4, and 8 in the figure denote the number of OpenMP threads.
Timing results for 1000 MD steps from double precision simulation on CPU, single precision simulation on GPU, and double precision simulation on GPU.
Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data
ERIC Educational Resources Information Center
Dinno, Alexis
2009-01-01
Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…
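The logic of PA can be sketched in a few lines: compare the observed first eigenvalue of a correlation matrix against a high percentile of first eigenvalues from random data of the same size. This pure-Python sketch uses a power-iteration eigensolver and a simplified retention criterion; it is illustrative only, not Horn's exact procedure or any of the implementations debated in the article.

```python
import math
import random

def corr_matrix(data):
    """Pearson correlation matrix of n rows by p columns."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    sd = [math.sqrt(sum((row[j] - mean[j]) ** 2 for row in data) / n) for j in range(p)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / (n * sd[i] * sd[j])
             for j in range(p)] for i in range(p)]

def top_eigenvalue(m, iters=200):
    """Dominant eigenvalue via power iteration (sufficient for this sketch)."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return sum(v[i] * sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m)))

def pa_threshold(n, p, reps=50, seed=0):
    """~95th percentile of the first eigenvalue from uncorrelated normal data."""
    rng = random.Random(seed)
    eigs = sorted(
        top_eigenvalue(corr_matrix([[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]))
        for _ in range(reps))
    return eigs[int(0.95 * reps)]

# demo: four indicators loading on one common factor
rng = random.Random(10)
obs = []
for _ in range(200):
    f = rng.gauss(0, 1)
    obs.append([f + rng.gauss(0, 1) for _ in range(4)])
lam1 = top_eigenvalue(corr_matrix(obs))
print(lam1 > pa_threshold(200, 4))  # retain the first component?
```

The distributional-form question the article studies corresponds to swapping `rng.gauss(0, 1)` for other random generators and checking how much the threshold moves.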
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard
2017-03-01
Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint state method. To demonstrate the code's portability across architectures, the performance of SeisCL is tested on three different devices: Intel CPUs, NVidia GPUs and the Intel Xeon Phi. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models across several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to computing the gradient by the adjoint state method and show the significant advantages of using OpenCL for FWI.
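The computational core that a code like SeisCL maps to OpenCL work-items is a finite-difference stencil evaluated independently at every grid point. Below is a minimal serial sketch of one such time step (1D acoustic, second order; the real code is 2D/3D viscoelastic, and every name and parameter here is illustrative).

```python
def fd_step(u_prev, u_curr, c, dt, dx):
    """One leapfrog update of the 1D acoustic wave equation.
    Each index i is independent of the others, which is exactly what
    makes this loop a natural OpenCL kernel (one work-item per point)."""
    n = len(u_curr)
    u_next = [0.0] * n  # zero Dirichlet boundaries
    r2 = (c * dt / dx) ** 2
    for i in range(1, n - 1):
        lap = u_curr[i - 1] - 2.0 * u_curr[i] + u_curr[i + 1]
        u_next[i] = 2.0 * u_curr[i] - u_prev[i] + r2 * lap
    return u_next

# propagate a point pulse for 100 stable steps (Courant number 0.5)
u_prev, u_curr = [0.0] * 200, [0.0] * 200
u_curr[100] = 1.0
for _ in range(100):
    u_prev, u_curr = u_curr, fd_step(u_prev, u_curr, c=1.0, dt=0.5, dx=1.0)
```

The adjoint wavefield in FWI is computed with the same stencil run on the time-reversed residuals, which is why forward and adjoint solves share one kernel.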
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has addressed the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
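The priority idea can be emulated serially: processing candidates in increasing priority order yields the same sample set that parallel threads converge to after conflict resolution, since the lowest-priority candidate in any conflict always survives. This is a Euclidean 2D sketch, not the intrinsic surface metric of the paper, and all parameters are illustrative.

```python
import math
import random

def poisson_disk(n_candidates=3000, r=0.05, seed=2):
    rng = random.Random(seed)
    # each candidate carries a position and a random, unique priority
    cands = [((rng.random(), rng.random()), rng.random()) for _ in range(n_candidates)]
    cands.sort(key=lambda c: c[1])  # priority order: lowest wins every conflict
    accepted = []
    for pos, _ in cands:
        if all(math.dist(pos, q) >= r for q in accepted):
            accepted.append(pos)
    return accepted

samples = poisson_disk()
print(f"{len(samples)} samples with pairwise distance >= 0.05")
```

Because the priorities are independent of position, accepting in priority order does not bias the spatial distribution, which is the property the paper exploits for parallel correctness.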
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the range of accessible simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
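The replica-exchange mechanics behind the GPU design can be shown with a scalar toy system standing in for the RFIM Hamiltonian: local Metropolis sweeps per replica (the GPU kernels), then neighbour swaps accepted with the standard exchange probability. Everything below is an illustrative stand-in for the CUDA implementation; names and the temperature ladder are assumptions.

```python
import math
import random

def energy(x):
    """Toy double-well potential standing in for the RFIM Hamiltonian."""
    return (x * x - 1.0) ** 2

def exchange_mc(temps=(0.1, 0.3, 0.9, 2.7), sweeps=2000, seed=3):
    rng = random.Random(seed)
    xs = [1.0 for _ in temps]  # one replica per temperature
    attempts = acc = 0
    for _ in range(sweeps):
        for i, T in enumerate(temps):  # local Metropolis updates (GPU part)
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(trial) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = trial
        # exchange step: swap a random neighbouring pair with probability
        # min(1, exp((1/T_j - 1/T_{j+1}) * (E_j - E_{j+1})))
        j = rng.randrange(len(temps) - 1)
        d = (1 / temps[j] - 1 / temps[j + 1]) * (energy(xs[j]) - energy(xs[j + 1]))
        attempts += 1
        if d >= 0 or rng.random() < math.exp(d):
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
            acc += 1
    return xs, acc / attempts
```

The adaptive part of the paper corresponds to monitoring the per-pair acceptance rate and inserting a mid-point temperature wherever it collapses; the sketch only measures the overall rate.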
Lee, Sang Ki; Kim, Kap Jung; Park, Kyung Hoon; Choy, Won Sik
2014-10-01
With the continuing improvements in implants for distal humerus fractures, it is expected that newer types of plates, which are anatomically precontoured, thinner and less irritating to soft tissue, would have comparable outcomes when used in a clinical study. The purpose of this study was to compare the clinical and radiographic outcomes in patients with distal humerus fractures who were treated with orthogonal and parallel plating methods using precontoured distal humerus plates. Sixty-seven patients with a mean age of 55.4 years (range 22-90 years) were included in this prospective study. The subjects were randomly assigned to receive 1 of 2 treatments: orthogonal or parallel plating. The following outcomes were assessed: operating time, time to fracture union, presence of a step or gap at the articular margin, varus-valgus angulation, functional recovery, and complications. No intergroup differences were observed in radiological or clinical results. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes, mean operation time, union time, or complication rates. There were no cases of fracture nonunion in either group; heterotopic ossification was found in 3 patients in the orthogonal plating group and 2 patients in the parallel plating group. However, the orthogonal plating method may be preferred in cases of coronal shear fractures, where posterior-to-anterior fixation may provide additional stability to the intraarticular fractures. Additionally, the parallel plating method may be preferred for fractures that occur at the most distal end of the humerus.
Sarli, L; Iusco, D R; Sansebastiano, G; Costi, R
2001-08-01
No randomized trial exists that specifically addresses the issue of laparoscopic bilateral inguinal hernia repair. The purpose of the present prospective, randomized, controlled, clinical study was to assess short- and long-term results when comparing simultaneous bilateral hernia repair by an open, tension-free anterior approach with laparoscopic "bikini mesh" posterior repair. Forty-three low-risk male patients with bilateral primary inguinal hernia were randomly assigned to undergo either laparoscopic preperitoneal "bikini mesh" hernia repair (TAPP) or open Lichtenstein hernioplasty. There was no difference in operating time between the two groups. The mean cost of laparoscopic hernioplasty was higher (P < 0.001). The intensity of postoperative pain was greater in the open hernia repair group at 24 hours, 48 hours, and 7 days after surgery (P < 0.001), with a greater consumption of pain medication among these patients (P < 0.05). The median time to return to work was 30 days for the open hernia repair group and 16 days for the laparoscopic "bikini mesh" repair group (P < 0.05). Only 1 asymptomatic recurrence (4.3%) was discovered in the open group. The laparoscopic approach to bilateral hernia with "bikini mesh" appears to be preferable to the open Lichtenstein tension-free hernioplasty in terms of the postoperative quality of life and interruption of occupational activity.
Battelino, Tadej; Nimri, Revital; Dovc, Klemen; Phillip, Moshe; Bratina, Natasa
2017-06-01
To investigate whether predictive low glucose management (PLGM) of the MiniMed 640G system significantly reduces the rate of hypoglycemia compared with the sensor-augmented insulin pump in children with type 1 diabetes. This randomized, two-arm, parallel, controlled, two-center open-label study included 100 children and adolescents with type 1 diabetes and glycated hemoglobin A 1c ≤10% (≤86 mmol/mol) and using continuous subcutaneous insulin infusion. Patients were randomly assigned to either an intervention group with PLGM features enabled (PLGM ON) or a control group (PLGM OFF), in a 1:1 ratio, all using the same type of sensor-augmented insulin pump. The primary end point was the number of hypoglycemic events below 65 mg/dL (3.6 mmol/L), based on sensor glucose readings, during a 14-day study treatment. The analysis was performed by intention to treat for all randomized patients. The number of hypoglycemic events below 65 mg/dL (3.6 mmol/L) was significantly smaller in the PLGM ON compared with the PLGM OFF group (mean ± SD 4.4 ± 4.5 and 7.4 ± 6.3, respectively; P = 0.008). This was also true when calculated separately for night ( P = 0.025) and day ( P = 0.022). No severe hypoglycemic events occurred; however, there was a significant increase in time spent above 140 mg/dL (7.8 mmol/L) in the PLGM ON group ( P = 0.0165). The PLGM insulin suspension was associated with a significantly reduced number of hypoglycemic events. Although this was achieved at the expense of increased time in moderate hyperglycemia, there were no serious adverse effects in young patients with type 1 diabetes. © 2017 by the American Diabetes Association.
Cree, Bruce A C; Arnold, Douglas L; Cascione, Mark; Fox, Edward J; Williams, Ian M; Meng, Xiangyi; Schofield, Lesley; Tenenbaum, Nadia
2018-01-01
In relapsing-remitting multiple sclerosis (RRMS), suboptimal adherence to injectable disease-modifying therapies (iDMTs; interferon β-1a/b, glatiramer acetate) is common, reducing their effectiveness. Patient retention on oral fingolimod and iDMTs was evaluated in PREFER MS , a randomized, parallel-group, active-controlled, open-label, 48-week study. Patients were included if they had RRMS, were aged 18-65 years and had Expanded Disability Status Scale score up to 6, enrolled at 117 US study sites, were treatment naïve or had received only one iDMT class. Patients were randomized 1:1 (fingolimod 0.5 mg/day; preselected iDMT) by interactive voice-and-web-response system without blinding, followed up quarterly, and allowed one study-approved treatment switch after 12 weeks, or earlier for efficacy or safety reasons. The primary outcome was patient retention on randomized treatment over 48 weeks. Secondary endpoints included patient-reported outcomes, brain volume loss (BVL), and cognitive function. Analysis of 433/436 patients receiving fingolimod and 428/439 receiving iDMTs showed that patient retention rate was significantly higher with fingolimod than with iDMTs [352 (81.3%) versus 125 (29.2%); 95% confidence interval 46.4-57.8%; p < 0.0001]. The most common treatment switch was from iDMT to fingolimod for injection-related reasons. Patient satisfaction was greater and BVL less with fingolimod than with iDMTs, with no difference in cognitive function. Adverse events were consistent with established tolerability profiles for each treatment. In RRMS, fingolimod was associated with better treatment retention, patient satisfaction and BVL outcomes than iDMTs. Patients may persist with iDMTs, but many may switch treatment if permitted. Treatment satisfaction fosters adherence, a prerequisite for optimal outcomes.
Suzuki, Kazuyuki; Endo, Ryujin; Takikawa, Yasuhiro; Moriyasu, Fuminori; Aoyagi, Yutaka; Moriwaki, Hisataka; Terai, Shuji; Sakaida, Isao; Sakai, Yoshiyuki; Nishiguchi, Shuhei; Ishikawa, Toru; Takagi, Hitoshi; Naganuma, Atsushi; Genda, Takuya; Ichida, Takafumi; Takaguchi, Koichi; Miyazawa, Katsuhiko; Okita, Kiwamu
2018-05-01
The efficacy and safety of rifaximin in the treatment of hepatic encephalopathy (HE) are widely known, but they have not been confirmed in Japanese patients with HE. Thus, two prospective, randomized studies (a phase II/III study and a phase III study) were carried out. Subjects with grade I or II HE and hyperammonemia were enrolled. The phase II/III study, which was a randomized, evaluator-blinded, active-comparator, parallel-group study, was undertaken at 37 institutions in Japan. Treatment periods were 14 days. Eligible patients were randomized to the rifaximin group (1200 mg/day) or the lactitol group (18-36 g/day). The phase III study was carried out in the same patients previously enrolled in the phase II/III study, and they were all treated with rifaximin (1200 mg/day) for 10 weeks. In the phase II/III study, 172 patients were enrolled. Blood ammonia (B-NH 3 ) concentration was significantly improved in the rifaximin group, but the difference between the two groups was not significant. The portal systemic encephalopathy index (PSE index), including HE grade, was significantly improved in both groups. In the phase III study, 87.3% of enrolled patients completed the treatment. The improved B-NH 3 concentration and PSE index were well maintained from the phase II/III study during the treatment period of the phase III study. Adverse drug reactions (ADRs) were seen in 13.4% of patients who received rifaximin, but there were no severe ADRs leading to death. The efficacy of rifaximin is sufficient and treatment is well tolerated in Japanese patients with HE and hyperammonemia. © 2017 The Japan Society of Hepatology.
Extended treatment for cigarette smoking cessation: a randomized control trial.
Laude, Jennifer R; Bailey, Steffani R; Crew, Erin; Varady, Ann; Lembke, Anna; McFall, Danielle; Jeon, Anna; Killen, Diana; Killen, Joel D; David, Sean P
2017-08-01
To test the potential benefit of extending cognitive-behavioral therapy (CBT) relative to not extending CBT on long-term abstinence from smoking. Two-group parallel randomized controlled trial. Patients were randomized to receive non-extended CBT (n = 111) or extended CBT (n = 112) following a 26-week open-label treatment. Community clinic in the United States. A total of 219 smokers (mean age: 43 years; mean cigarettes/day: 18). All participants received 10 weeks of combined CBT + bupropion sustained release (bupropion SR) + nicotine patch and were continued on CBT and either no medications if abstinent, continued bupropion + nicotine replacement therapy (NRT) if increased craving or depression scores, or varenicline if still smoking at 10 weeks. Half the participants were randomized at 26 weeks to extended CBT (E-CBT) to week 48 and half to non-extended CBT (no additional CBT sessions). The primary outcome was expired CO-confirmed, 7-day point-prevalence (PP) at 52- and 104-week follow-up. Analyses were based on intention-to-treat. PP abstinence rates at the 52-week follow-up were comparable across non-extended CBT (40%) and E-CBT (39%) groups [odds ratio (OR) = 0.99; 95% confidence interval (CI) = 0.55, 1.78]. A similar pattern was observed across non-extended CBT (39%) and E-CBT (33%) groups at the 104-week follow-up (OR = 0.79; 95% CI= 0.44, 1.40). Prolonging cognitive-behavioral therapy from 26 to 48 weeks does not appear to improve long-term abstinence from smoking. © 2017 Society for the Study of Addiction.
Lemesle, Gilles; Laine, Marc; Pankert, Mathieu; Puymirat, Etienne; Cuisset, Thomas; Boueri, Ziad; Maillard, Luc; Armero, Sébastien; Cayla, Guillaume; Bali, Laurent; Motreff, Pascal; Peyre, Jean-Pascal; Paganelli, Franck; Kerbaul, François; Roch, Antoine; Michelet, Pierre; Baumstarck, Karine; Bonello, Laurent
2018-01-01
According to recent literature, pretreatment with a P2Y 12 ADP receptor antagonist before coronary angiography appears no longer suitable in non-ST-segment elevation acute coronary syndrome (NSTE-ACS) due to an unfavorable risk-benefit ratio. Optimal delay of the invasive strategy in this specific context is unknown. We hypothesize that without P2Y 12 ADP receptor antagonist pretreatment, a very early invasive strategy may be beneficial. The EARLY trial (Early or Delayed Revascularization for Intermediate- and High-Risk Non-ST-Segment Elevation Acute Coronary Syndromes?) is a prospective, multicenter, randomized, controlled, open-label, 2-parallel-group study that plans to enroll 740 patients. Patients are eligible if the diagnosis of intermediate- or high-risk NSTE-ACS is made and an invasive strategy intended. Patients are randomized in a 1:1 ratio. In the control group, a delayed strategy is adopted, with the coronary angiography taking place between 12 and 72 hours after randomization. In the experimental group, a very early invasive strategy is performed within 2 hours. A loading dose of a P2Y 12 ADP receptor antagonist is given at the time of intervention in both groups. Recruitment began in September 2016 (n = 558 patients as of October 2017). The primary endpoint is the composite of cardiovascular death and recurrent ischemic events at 1 month. The EARLY trial aims to demonstrate the superiority of a very early invasive strategy compared with a delayed strategy in intermediate- and high-risk NSTE-ACS patients managed without P2Y 12 ADP receptor antagonist pretreatment. © 2018 Wiley Periodicals, Inc.
Harada, Tasuku; Kosaka, Saori; Elliesen, Joerg; Yasuda, Masanobu; Ito, Makoto; Momoeda, Mikio
2017-11-01
To investigate the efficacy and safety of ethinylestradiol 20 μg/drospirenone 3 mg in a flexible extended regimen (Flexible MIB ) compared with placebo to treat endometriosis-associated pelvic pain (EAPP). A phase 3, randomized, double-blind, placebo-controlled, parallel-group study, consisting of a 24-week double-blind treatment phase followed by a 28-week open-label extension phase with an unblinded reference arm. Thirty-two centers. A total of 312 patients with endometriosis. Patients were randomized to Flexible MIB , placebo, or dienogest. The Flexible MIB and placebo arms received 1 tablet per day continuously for 120 days, with a 4-day tablet-free interval either after 120 days or after ≥3 consecutive days of spotting and/or bleeding on days 25-120. After 24 weeks, placebo recipients were changed to Flexible MIB . Patients randomized to dienogest received 2 mg/d for 52 weeks in an unblinded reference arm. Absolute change in the most severe EAPP based on visual analog scale scores from the baseline observation phase to the end of the double-blind treatment phase. Compared with placebo, Flexible MIB significantly reduced the most severe EAPP (mean difference in visual analog scale score: -26.3 mm). Flexible MIB also improved other endometriosis-associated pain and gynecologic findings and reduced the size of endometriomas. Flexible MIB improved EAPP and was well tolerated, suggesting it may be a new alternative for managing endometriosis. NCT01697111. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Quitadamo, Paolo; Coccorullo, Paola; Giannetti, Eleonora; Romano, Claudio; Chiaro, Andrea; Campanozzi, Angelo; Poli, Emanuela; Cucchiara, Salvatore; Di Nardo, Giovanni; Staiano, Annamaria
2012-10-01
To compare the effectiveness of a mixture of acacia fiber, psyllium fiber, and fructose (AFPFF) with polyethylene glycol 3350 combined with electrolytes (PEG+E) in the treatment of children with chronic functional constipation (CFC); and to evaluate the safety and effectiveness of AFPFF in the treatment of children with CFC. This was a randomized, open label, prospective, controlled, parallel-group study involving 100 children (M/F: 38/62; mean age ± SD: 6.5 ± 2.7 years) who were diagnosed with CFC according to the Rome III Criteria. Children were randomly divided into 2 groups: 50 children received AFPFF (16.8 g daily) and 50 children received PEG+E (0.5 g/kg daily) for 8 weeks. Primary outcome measures were frequency of bowel movements, stool consistency, fecal incontinence, and improvement of other associated gastrointestinal symptoms. Safety was assessed with evaluation of clinical adverse effects and growth measurements. Compliance rates were 72% for AFPFF and 96% for PEG+E. A significant improvement of constipation was seen in both groups. After 8 weeks, 77.8% of children treated with AFPFF and 83% of children treated with PEG+E had improved (P = .788). Neither PEG+E nor AFPFF caused any clinically significant side effects during the entire course of the study period. In this randomized study, we did not find any significant difference between the efficacy of AFPFF and PEG+E in the treatment of children with CFC. Both medications were proved to be safe for CFC treatment, but PEG+E was better accepted by children. Copyright © 2012 Mosby, Inc. All rights reserved.
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate sample paths of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of standard numerical integration techniques for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. As a numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm uses the Message Passing Interface (MPI).
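The deterministic backbone of the method is the classical parareal iteration U(k+1)[n+1] = G(U(k+1)[n]) + F(U(k)[n]) - G(U(k)[n]). A minimal ODE sketch with forward-Euler coarse and fine propagators is given below; the SDE version discussed in the paper additionally has to couple the Wiener increments between the two levels, which is not shown. All names are illustrative.

```python
def coarse(u, dt, f):
    """Cheap coarse propagator: one forward-Euler step."""
    return u + dt * f(u)

def fine(u, dt, f, m=20):
    """Accurate fine propagator: m forward-Euler sub-steps over the window."""
    h = dt / m
    for _ in range(m):
        u = u + h * f(u)
    return u

def parareal(u0, t_end, n_windows, f, iterations=5):
    dt = t_end / n_windows
    # initial guess from the coarse propagator alone
    U = [u0]
    for _ in range(n_windows):
        U.append(coarse(U[-1], dt, f))
    for _ in range(iterations):
        # the fine solves over each window are independent -> run in parallel
        F = [fine(U[n], dt, f) for n in range(n_windows)]
        G_old = [coarse(U[n], dt, f) for n in range(n_windows)]
        new_U = [u0]
        for n in range(n_windows):
            new_U.append(coarse(new_U[-1], dt, f) + F[n] - G_old[n])
        U = new_U
    return U
```

In a distributed implementation, the list comprehension computing `F` is spread across MPI ranks, one time window per rank; only the cheap coarse sweep remains sequential.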
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C
2009-01-01
Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content of the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band-pass filtering and background noise subtraction followed by 3D Fourier reconstruction. This process is rather slow on a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared-memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration achieved a speedup factor of 46.66 over the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform the 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 were achieved during the reconstruction process and oximetry computation, respectively, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and is presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through the local intranet (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power for obtaining biophysical parameters from 3D EPR oximetric imaging, almost in real time.
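Because each acquired trace is filtered independently, the filtration stage parallelizes as a simple work-sharing loop over traces. The Python analogue below uses a toy moving-average filter in place of the digital band-pass filter and a thread pool in place of the OpenMP C++ loop; both substitutions, and all names, are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def smooth(signal, w=5):
    """Toy stand-in for the per-trace digital band-pass filter."""
    half = w // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def filter_all(signals, workers=4):
    # one independent task per trace, like an OpenMP worksharing loop
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(smooth, signals))

traces = [[(i * k) % 7 - 3.0 for i in range(50)] for k in range(1, 9)]
filtered = filter_all(traces)
```

Since the tasks share no state, the parallel result is identical to a serial pass over the traces, which is what makes the 46.66x filtration speedup in the paper purely a matter of throughput.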
NASA Astrophysics Data System (ADS)
Colas, Laurent; Lu, Ling-Feng; Křivská, Alena; Jacquot, Jonathan; Hillairet, Julien; Helou, Walid; Goniche, Marc; Heuraux, Stéphane; Faudot, Eric
2017-02-01
We investigate theoretically how sheath radio-frequency (RF) oscillations relate to the spatial structure of the near RF parallel electric field E ∥ emitted by ion cyclotron (IC) wave launchers. We use a simple model of slow wave (SW) evanescence coupled with direct current (DC) plasma biasing via sheath boundary conditions in a 3D parallelepiped filled with homogeneous cold magnetized plasma. Within a ‘wide-sheath’ asymptotic regime, valid for large-amplitude near RF fields, the RF part of this simple RF + DC model becomes linear: the sheath oscillating voltage V RF at open field line boundaries can be re-expressed as a linear combination of individual contributions by every emitting point in the input field map. SW evanescence makes individual contributions all the larger as the wave emission point is located closer to the sheath walls. The decay of |V RF| with the emission point/sheath poloidal distance involves the transverse SW evanescence length and the radial protrusion depth of lateral boundaries. The decay of |V RF| with the emitter/sheath parallel distance is quantified as a function of the parallel SW evanescence length and the parallel connection length of open magnetic field lines. For realistic geometries and target SOL plasmas, poloidal decay occurs over a few centimeters. Typical parallel decay lengths for |V RF| are found to be smaller than IC antenna parallel extension. Oscillating sheath voltages at IC antenna side limiters are therefore mainly sensitive to E ∥ emission by active or passive conducting elements near these limiters, as suggested by recent experimental observations. Parallel proximity effects could also explain why sheath oscillations persist with antisymmetric strap toroidal phasing, despite the parallel antisymmetry of the radiated field map. They could finally justify current attempts at reducing the RF fields induced near antenna boxes to attenuate sheath oscillations in their vicinity.
A feasibility study on porting the community land model onto accelerators using OpenACC
Wang, Dali; Wu, Wei; Winkler, Frank; ...
2014-01-01
As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), and the Arctic Terrestrial Simulator (ATS)) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, applied this kernel within the actual CLM dataflow procedure, and investigated strategies for data parallelization and the benefit of the data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node, the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI+OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for looking into the potential benefits of the "deep copy" capability and "routine" feature of the OpenACC standard. In conclusion, we believe that our experience with the environmental model CLM can benefit many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers empowered by hybrid computing architectures.
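A minimal sketch of the directive-based porting pattern the study describes: an independent loop over land-model columns annotated with OpenACC compute and data-movement directives. The kernel body (a clamp-to-capacity moisture update) is a hypothetical stand-in, not actual CLM code; without an OpenACC compiler the pragma is ignored and the loop runs serially.

```c
#include <stddef.h>

/* Hypothetical stand-in for a CLM-style column kernel: each grid
   column is independent, so the loop can be offloaded with OpenACC.
   copy/copyin clauses express the data movement the study analyzes.
   Compiled without OpenACC support, the pragma is ignored and the
   result is identical, computed serially. */
void update_columns(double *moisture, const double *influx,
                    size_t n, double capacity)
{
    #pragma acc parallel loop copy(moisture[0:n]) copyin(influx[0:n])
    for (size_t i = 0; i < n; ++i) {
        double m = moisture[i] + influx[i];
        moisture[i] = (m > capacity) ? capacity : m;  /* clamp to capacity */
    }
}
```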
The Particle Accelerator Simulation Code PyORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.
PRESTO-Tango as an open-source resource for interrogation of the druggable human GPCRome.
Kroeze, Wesley K; Sassano, Maria F; Huang, Xi-Ping; Lansu, Katherine; McCorvy, John D; Giguère, Patrick M; Sciaky, Noah; Roth, Bryan L
2015-05-01
G protein-coupled receptors (GPCRs) are essential mediators of cellular signaling and are important targets of drug action. Of the approximately 350 nonolfactory human GPCRs, more than 100 are still considered to be 'orphans' because their endogenous ligands remain unknown. Here, we describe a unique open-source resource that allows interrogation of the druggable human GPCRome via a G protein-independent β-arrestin-recruitment assay. We validate this unique platform at more than 120 nonorphan human GPCR targets, demonstrate its utility for discovering new ligands for orphan human GPCRs and describe a method (parallel receptorome expression and screening via transcriptional output, with transcriptional activation following arrestin translocation (PRESTO-Tango)) for the simultaneous and parallel interrogation of the entire human nonolfactory GPCRome.
Learning and Best Practices for Learning in Open-Source Software Communities
ERIC Educational Resources Information Center
Singh, Vandana; Holt, Lila
2013-01-01
This research is about participants who use open-source software (OSS) discussion forums for learning. Learning in online communities of education as well as non-education-related online communities has been studied under the lens of social learning theory and situated learning for a long time. In this research, we draw parallels among these two…
Biederman, Joseph; Petty, Carter R; Woodworth, K Yvonne; Lomedico, Alexandra; O'Connor, Katherine B; Wozniak, Janet; Faraone, Stephen V
2012-03-01
To examine the informativeness of open-label trials toward predicting results in subsequent randomized, placebo-controlled clinical trials of psychopharmacologic treatments for pediatric bipolar disorder. We searched journal articles through PubMed at the National Library of Medicine using bipolar disorder, mania, pharmacotherapy, treatment and clinical trial as keywords. This search was supplemented with scientific presentations at national and international scientific meetings and submitted manuscripts from our group. Selection criteria included (1) enrollment of children diagnosed with DSM-IV bipolar disorder; (2) prospective assessment of at least 3 weeks; (3) monotherapy of a pharmacologic treatment for bipolar disorder; (4) use of a randomized placebo-controlled design or an open-label design for the same therapeutic compound; and (5) repeated use of the Young Mania Rating Scale (YMRS) as an outcome. The following information and data were extracted from 14 studies: study design, name of medication, class of medication, dose of medication, sample size, age, sex, trial length, and YMRS mean and standard deviation baseline and follow-up scores. For both study designs, the pooled effect size was statistically significant (open-label studies, z = 8.88, P < .001; randomized placebo-controlled studies, z = 13.75, P < .001), indicating a reduction in the YMRS from baseline to endpoint in both study designs. In a meta-analysis regression, study design was not a significant predictor of mean change in the YMRS. We found similarities in the treatment effects between open-label and randomized placebo-controlled studies in youth with bipolar disorder indicating that open-label studies are useful predictors of the potential safety and efficacy of a given compound in the treatment of pediatric bipolar disorder. © Copyright 2012 Physicians Postgraduate Press, Inc.
Novel Door-opening Method for Six-legged Robots Based on Only Force Sensing
NASA Astrophysics Data System (ADS)
Chen, Zhi-Jun; Gao, Feng; Pan, Yang
2017-09-01
Current door-opening methods are mainly developed for tracked, wheeled and biped robots by applying multi-DOF manipulators and vision systems. However, door-opening methods for six-legged robots are seldom studied, especially ones using 0-DOF tools to operate and only force sensing to detect. A novel door-opening method for six-legged robots is developed and implemented on the six-parallel-legged robot. The kinematic model of the six-parallel-legged robot is established, and a model for measuring the positional relationship between the robot and the door is proposed. The measurement model is based entirely on force sensing. The real-time trajectory planning method and the control strategy are designed. The trajectory planning method allows a maximum angle of 45° between the sagittal axis of the robot body and the normal line of the door plane. A 0-DOF tool mounted on the robot body is used for the operation. By integrating with the body, the tool has 6 DOFs and enough workspace to operate. The loose grasp achieved by the tool helps release the internal force in the tool. Experiments are carried out to validate the method. The results show that the method is effective and robust in opening doors wider than 1 m. This paper proposes a novel door-opening method for six-legged robots, which notably uses a 0-DOF tool and only force sensing to detect and open the door.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
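The descriptive style the abstract credits OpenACC with can be illustrated by a reduction loop: the directive states only what may run in parallel, and the compiler decides how to map it onto a CPU or GPU. The averaged site observable below is a placeholder, not the actual LQCD action or any code from the paper.

```c
/* Illustration of OpenACC's descriptive model: the same annotated C
   loop can be compiled for multi-core CPUs or GPUs, with the compiler
   choosing the mapping.  The observable (a plain average over site
   values) is a hypothetical stand-in for an LQCD measurement; the
   pragma is ignored by compilers without OpenACC support. */
double site_average(const double *site, long n)
{
    double sum = 0.0;
    #pragma acc parallel loop reduction(+:sum) copyin(site[0:n])
    for (long i = 0; i < n; ++i)
        sum += site[i];
    return sum / (double)n;
}
```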
Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana
2016-01-01
It has been shown that it is possible to modify macronutrient oxidation, physical fitness and resting energy expenditure (REE) by changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and a conventional diet (C) with the same caloric restriction (−500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry, and indirect calorimetry was performed at the start and after 12 weeks. Repeated-measures ANOVA (analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used for evaluation of the relationships between continuous variables and factors. Maximal oxygen consumption (VO2max) increased by 12% in the vegetarian group (V) (F = 13.1, p < 0.001, partial η2 = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η2 = 0.209). Maximal performance (Watt max) increased by 21% in V (F = 8.3, p < 0.001, partial η2 = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η2 = 0.116). Our results indicate that V leads more effectively to improvement in physical fitness than C after an aerobic exercise program. PMID:27792174
Performance of OVERFLOW-D Applications based on Hybrid and MPI Paradigms on IBM Power4 System
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biegel, Bryan (Technical Monitor)
2002-01-01
This report briefly discusses our preliminary performance experiments with parallel versions of OVERFLOW-D applications. These applications are based on MPI and hybrid paradigms on the IBM Power4 system here at the NAS Division. This work is part of an effort to determine the suitability of the system and its parallel libraries (MPI/OpenMP) for specific scientific computing objectives.
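The hybrid paradigm tested in the report combines MPI across nodes with OpenMP threading within a node. Below is a sketch of the per-rank OpenMP part, using a simple sum-of-squares residual as a hypothetical stand-in for OVERFLOW-D's actual kernels; the MPI reduction across ranks is indicated only in comments so the fragment stays self-contained.

```c
/* Per-rank piece of a hybrid MPI/OpenMP pattern: each MPI rank owns a
   partition of the grid zones and threads over them with OpenMP.  In
   the full hybrid code, an MPI_Allreduce would then combine the
   ranks' partial sums; that step is elided here.  The pragma is
   ignored when OpenMP is not enabled, leaving a correct serial loop. */
double local_residual(const double *zone, int nzones)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < nzones; ++i)
        sum += zone[i] * zone[i];   /* per-zone squared residual */
    return sum;
}
```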
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Richard; Voter, Arthur; Uberuaga, Blas
2017-10-23
The SpecTAD software represents a refactoring of the Temperature Accelerated Dynamics (TAD2) code authored by Arthur F. Voter and Blas P. Uberuaga (LA-CC-02-05). SpecTAD extends the capabilities of TAD2, by providing algorithms for both temporal and spatial parallelism. The novel algorithms for temporal parallelism include both speculation and replication based techniques. SpecTAD also offers the optional capability to dynamically link to the open-source LAMMPS package.
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we...Abbreviations: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS
Base drive for paralleled inverter systems
NASA Technical Reports Server (NTRS)
Nagano, S. (Inventor)
1980-01-01
In a paralleled inverter system, a positive feedback current derived from the total current from all of the modules of the inverter system is applied to the base drive of each of the power transistors of all modules, thereby to provide all modules protection against open or short circuit faults occurring in any of the modules, and force equal current sharing among the modules during turn on of the power transistors.
NASA Astrophysics Data System (ADS)
Cho, In Ho
For the last few decades, we have obtained tremendous insight into the underlying microscopic mechanisms of degrading quasi-brittle materials from persistent and near-saintly efforts in laboratories, and at the same time we have seen unprecedented evolution in computational technology such as massively parallel computers. Thus, the time is ripe to embark on a novel approach to settle unanswered questions, especially for the earthquake engineering community, by harmoniously combining the microphysics mechanisms with advanced parallel computing technology. To begin with, it should be stressed that we placed a great deal of emphasis on preserving the clear meaning and physical counterparts of all the microscopic material models proposed herein, reflecting the belief that the more physical mechanisms we incorporate, the better the prediction we can obtain. We began by reviewing representative microscopic analysis methodologies, selecting the "fixed-type" multidirectional smeared crack model as the base framework for nonlinear quasi-brittle materials, since it is widely believed to best retain the physical nature of actual cracks. Microscopic stress functions are proposed by integrating well-received existing models to update normal stresses on the crack surfaces (up to three orthogonal surfaces are allowed to initiate herein) under cyclic loading. Unlike the normal stress update, special attention had to be paid to the shear stress update on the crack surfaces, due primarily to the well-known pathological nature of the fixed-type smeared crack model: spurious large stress transfer over the open crack under nonproportional loading. In hopes of exploiting a physical mechanism to resolve this deleterious nature of the fixed crack model, a tribology-inspired three-dimensional (3d) interlocking mechanism has been proposed.
Following the main trend of tribology (i.e., the science and engineering of interacting surfaces), we introduced the base fabric of solid particles in a soft matrix to explain realistic interlocking over rough crack surfaces, and the adopted Gaussian distribution feeds random particle sizes to the entire domain. Validation against a well-documented rough crack experiment reveals promising accuracy of the proposed 3d interlocking model. A consumed-energy-based damage model has been proposed to capture the weak correlation between the normal and shear stresses on the crack surfaces, and also to describe the nature of irrecoverable damage. Since the evaluation of the consumed energy is directly linked to the microscopic deformation, which can be efficiently tracked on the crack surfaces, the proposed damage model is believed to provide a more physical interpretation than existing damage mechanics, which fundamentally stems from mathematical derivation with few physical counterparts. Another novel point of the present work lies in the topological-transition-based "smart" steel bar model, notably with an evolving compressive buckling length. We presented a systematic framework of information flow between the key ingredients of composite materials (i.e., a steel bar and its surrounding concrete elements). The smart steel model suggested can incorporate smooth transition during reversal loading, tensile rupture, early buckling after reversal from excessive tensile loading, and even compressive buckling. In particular, the buckling length is made to evolve according to the damage states of the surrounding elements of each bar, while all other dominant models leave the length unchanged. What lies behind all the aforementioned novel attempts is, of course, the problem-optimized parallel platform. In fact, parallel computing in our field has been restricted to monotonic shock or blast loading with explicit algorithms, which are characteristically easy to parallelize.
In the present study, efficient parallelization strategies are proposed for the highly demanding implicit nonlinear finite element analysis (FEA) of real-scale reinforced concrete (RC) structures under cyclic loading. A quantitative comparison of state-of-the-art parallel strategies, in terms of factorization, was carried out, leading to a problem-optimized solver that successfully exploits the penalty method and the banded nature of the system. In particular, the penalty method employed imparts considerable smoothness to the global response, which yields a practical superiority of the parallel triangular system solver over other advanced solvers such as the parallel preconditioned conjugate gradient method. Other salient issues of parallelization are also addressed. The parallel platform established offers unprecedented access to simulations of real-scale structures, giving new understanding of the physics-based mechanisms adopted and of probabilistic randomness at the entire-system level. In particular, the platform enables bold simulations of real-scale RC structures exposed to cyclic loading: an H-shaped wall system and a 4-story T-shaped wall system. The simulations show the desired capability of accurately predicting global force-displacement responses, postpeak softening behavior, and compressive buckling of longitudinal steel bars. It is fascinating to see that the intrinsic randomness of the 3d interlocking model appears to cause "localized" damage of the real-scale structures, which is consistent with reported observations in different fields such as granular media. Equipped with accuracy, stability and scalability as demonstrated so far, the parallel platform is believed to serve as fertile ground for introducing further physical mechanisms into various research fields as well as the earthquake engineering community. In the near future, it can be further expanded to run in concert with reliable FEA programs such as FRAME3d or OpenSees.
Following the central notion of the "multiscale" analysis technique, actual infrastructure exposed to extreme natural hazards can be successfully tackled by this next-generation analysis tool: the harmonious union of the parallel platform and a general FEA program. At the same time, any type of experiment can easily be conducted in this "virtual laboratory."
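The solver discussion above combines the penalty method with a triangular system solve that exploits the banded matrix structure. The following is a serial sketch of banded forward substitution (lower-triangular system, half-bandwidth p); the dense row-major storage and the 2×2 example are illustrative assumptions, not the dissertation's actual data layout.

```c
/* Banded forward substitution for a lower-triangular system L x = b.
   a is a row-major n x n matrix whose entries more than p columns
   below the diagonal are assumed zero, so each row touches at most
   p prior unknowns.  The dissertation's parallel variant distributes
   this work; here we keep the serial logic for clarity. */
void banded_forward_sub(int n, int p, const double *a,
                        const double *b, double *x)
{
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        int j0 = (i - p > 0) ? i - p : 0;   /* first column in the band */
        for (int j = j0; j < i; ++j)
            s -= a[i * n + j] * x[j];
        x[i] = s / a[i * n + i];
    }
}
```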
Hyperspectral anomaly detection using Sony PlayStation 3
NASA Astrophysics Data System (ADS)
Rosario, Dalton; Romano, João; Sepulveda, Rene
2009-05-01
We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run, in near real time, a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting HS-based algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible. The PS3 HADA is a unique parallel, random-sampling-based anomaly detection approach that does not require prior spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament, Research, Development and Engineering Center, which aims at demonstrating the performance of innovative algorithmic approaches for applications requiring autonomous anomaly detection using passive sensors.
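As a toy illustration of the random-sampling idea attributed to the PS3 HADA (no prior background spectra required), the sketch below estimates the clutter background from randomly sampled pixels and scores a pixel by its distance from that estimate. Single-band data, the tiny LCG sampler, and the absolute-difference score are all assumptions made here; the abstract does not specify the actual algorithm or its Cell-processor decomposition.

```c
#include <math.h>

/* Deterministic linear congruential generator (Numerical Recipes
   constants) so the sampling is reproducible in this sketch. */
static unsigned lcg_next(unsigned *s)
{
    *s = *s * 1664525u + 1013904223u;
    return *s;
}

/* Toy anomaly score: estimate the background level from nsamples
   randomly chosen pixels (no prior background model) and report how
   far the queried pixel sits from that estimate.  Larger = more
   anomalous.  Single-band stand-in for a hyperspectral detector. */
double anomaly_score(const double *pixels, int n, int nsamples,
                     int pixel, unsigned seed)
{
    double mean = 0.0;
    for (int k = 0; k < nsamples; ++k)
        mean += pixels[lcg_next(&seed) % (unsigned)n];
    mean /= nsamples;
    return fabs(pixels[pixel] - mean);
}
```

In a parallel setting, independent sample batches (e.g., one per Cell SPE) could be scored concurrently, which is the sense in which random sampling parallelizes naturally.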
Chlumský, J; Striz, I; Terl, M; Vondracek, J
2006-01-01
Under Global Initiative for Asthma guidelines, the clinical control of disease activity and the adjustment of treatment in patients with asthma are based on symptoms, use of rescue medication, lung function and peak expiratory flow measurement (standard strategy). We investigated whether a strategy to reduce the number of sputum eosinophils (EOS strategy) gives better clinical control and a lower exacerbation rate compared with the standard strategy. Fifty-five patients with moderate to severe asthma entered this open, randomized, parallel-group study and visited the out-patient department every 3 months for 18 months. The dose of corticosteroids was adjusted according to the standard strategy or the percentage of sputum eosinophils (EOS strategy). During the study period, the EOS strategy led to a significantly lower incidence of asthma exacerbations compared with the standard strategy group (0.22 and 0.78 exacerbations per year per patient, respectively). There were significant differences between the strategies in time to first exacerbation.
Bujnowski, K; Getgood, A; Leitch, K; Farr, J; Dunning, C; Burkhart, T A
2018-02-01
It has been suggested that the use of a pilot-hole may reduce the risk of fracture to the lateral cortex. Therefore the purpose of this study was to determine the effect of a pilot hole on the strains and occurrence of fractures at the lateral cortex during the opening of a high tibial osteotomy (HTO) and post-surgery loading. A total of 14 cadaveric tibias were randomized to either a pilot hole (n = 7) or a no-hole (n = 7) condition. Lateral cortex strains were measured while the osteotomy was opened 9 mm and secured in place with a locking plate. The tibias were then subjected to an initial 800 N load that increased by 200 N every 5000 cycles, until failure or a maximum load of 2500 N. There was no significant difference in the strains on the lateral cortex during HTO opening between the pilot hole and no-hole conditions. Similarly, the lateral cortex and fixation plate strains were not significantly different during cyclic loading between the two conditions. Using a pilot hole did not significantly decrease the strains experienced at the lateral cortex, nor did it reduce the risk of fracture. The nonsignificant differences found here most likely occurred because the pilot hole merely translated the stress concentration laterally to a parallel point on the surface of the hole. Cite this article : K. Bujnowski, A. Getgood, K. Leitch, J. Farr, C. Dunning, T. A. Burkhart. A pilot hole does not reduce the strains or risk of fracture to the lateral cortex during and following a medial opening wedge high tibial osteotomy in cadaveric specimens. Bone Joint Res 2018;7:166-172. DOI: 10.1302/2046-3758.72.BJR-2017-0337.R1.
Laman, Moses; Moore, Brioni R.; Benjamin, John M.; Yadi, Gumul; Bona, Cathy; Warrel, Jonathan; Kattenberg, Johanna H.; Koleala, Tamarah; Manning, Laurens; Kasian, Bernadine; Robinson, Leanne J.; Sambale, Naomi; Lorry, Lina; Karl, Stephan; Davis, Wendy A.; Rosanas-Urgell, Anna; Mueller, Ivo; Siba, Peter M.; Betuela, Inoni; Davis, Timothy M. E.
2014-01-01
Background: Artemisinin combination therapies (ACTs) with broad efficacy are needed where multiple Plasmodium species are transmitted, especially in children, who bear the brunt of infection in endemic areas. In Papua New Guinea (PNG), artemether-lumefantrine is the first-line treatment for uncomplicated malaria, but it has limited efficacy against P. vivax. Artemisinin-naphthoquine should have greater activity in vivax malaria because the elimination of naphthoquine is slower than that of lumefantrine. In this study, the efficacy, tolerability, and safety of these ACTs were assessed in PNG children aged 0.5–5 y. Methods and Findings: An open-label, randomized, parallel-group trial of artemether-lumefantrine (six doses over 3 d) and artemisinin-naphthoquine (three daily doses) was conducted between 28 March 2011 and 22 April 2013. Parasitologic outcomes were assessed without knowledge of treatment allocation. Primary endpoints were the 42-d P. falciparum PCR-corrected adequate clinical and parasitologic response (ACPR) and the P. vivax PCR-uncorrected 42-d ACPR. Non-inferiority and superiority designs were used for falciparum and vivax malaria, respectively. Because the artemisinin-naphthoquine regimen involved three doses rather than the manufacturer-specified single dose, the first 188 children underwent detailed safety monitoring. Of 2,542 febrile children screened, 267 were randomized, and 186 with falciparum and 47 with vivax malaria completed the 42-d follow-up. Both ACTs were safe and well tolerated. P. falciparum ACPRs were 97.8% and 100.0% in artemether-lumefantrine and artemisinin-naphthoquine-treated patients, respectively (difference 2.2% [95% CI −3.0% to 8.4%] versus −5.0% non-inferiority margin, p = 0.24), and P. vivax ACPRs were 30.0% and 100.0%, respectively (difference 70.0% [95% CI 40.9%–87.2%], p<0.001). 
Limitations included the exclusion of 11% of randomized patients with sub-threshold parasitemias on confirmatory microscopy and direct observation of only morning artemether-lumefantrine dosing. Conclusions: Artemisinin-naphthoquine is non-inferior to artemether-lumefantrine in PNG children with falciparum malaria but has greater efficacy against vivax malaria, findings with implications in similar geo-epidemiologic settings within and beyond Oceania. Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12610000913077. PMID:25549086
Lee, Myungchul; Yoo, Juhyung; Kim, Jin Goo; Kyung, Hee-Soo; Bin, Seong-Il; Kang, Seung-Baik; Choi, Choong Hyeok; Moon, Young-Wan; Kim, Young-Mo; Han, Seong Beom; In, Yong; Choi, Chong Hyuk; Kim, Jongoh; Lee, Beom Koo; Cho, Sangsook
2017-12-01
The aim of this study was to evaluate the safety and analgesic efficacy of polmacoxib 2 mg versus placebo in a superiority comparison or versus celecoxib 200 mg in a noninferiority comparison in patients with osteoarthritis (OA). This study was a 6-week, phase III, randomized, double-blind, and parallel-group trial followed by an 18-week, single arm, open-label extension. Of the 441 patients with knee or hip OA screened, 362 were randomized; 324 completed 6 weeks of treatment and 220 completed the extension. Patients were randomized to receive oral polmacoxib 2 mg (n = 146), celecoxib 200 mg (n = 145), or placebo (n = 71) once daily for 6 weeks. During the extension, all participants received open-label polmacoxib 2 mg. The primary endpoint was the change in Western Ontario and McMaster Universities (WOMAC)-pain subscale score from baseline to week 6. Secondary endpoints included WOMAC-OA Index, OA subscales (pain, stiffness, and physical function) and Physician's and Subject's Global Assessments at weeks 3 and 6. Other outcome measures included adverse events (AEs), laboratory tests, vital signs, electrocardiograms, and physical examinations. After 6 weeks, the polmacoxib-placebo treatment difference was -2.5 (95% confidence interval [CI], -4.4 to -0.6; p = 0.011) and the polmacoxib-celecoxib treatment difference was 0.6 (CI, -0.9 to 2.2; p = 0.425). According to Physician's Global Assessments, more subjects were "much improved" at week 3 with polmacoxib than with celecoxib or placebo. Gastrointestinal and general disorder AEs occurred with a greater frequency with polmacoxib or celecoxib than with placebo. Polmacoxib 2 mg was relatively well tolerated and demonstrated efficacy superior to placebo and noninferior to celecoxib after 6 weeks of treatment in patients with OA. 
The results obtained during the 18-week trial extension with polmacoxib 2 mg were consistent with those observed during the 6-week treatment period, indicating that polmacoxib can be considered safe for long-term use based on this relatively small scale of study in a Korean population. More importantly, the results of this study showed that polmacoxib has the potential to be used as a pain relief drug with reduced gastrointestinal side effects compared to traditional nonsteroidal anti-inflammatory drugs for OA.
Accelerating a three-dimensional eco-hydrological cellular automaton on GPGPU with OpenCL
NASA Astrophysics Data System (ADS)
Senatore, Alfonso; D'Ambrosio, Donato; De Rango, Alessio; Rongo, Rocco; Spataro, William; Straface, Salvatore; Mendicino, Giuseppe
2016-10-01
This work presents an effective implementation of a numerical model for complete eco-hydrological Cellular Automata modeling on Graphics Processing Units (GPUs) with OpenCL (Open Computing Language) for heterogeneous computation (i.e., on CPUs and/or GPUs). Different types of parallel implementations were carried out (e.g., use of fast local memory, loop unrolling, etc.), showing increasing performance improvements in terms of speedup and adopting also some original optimization strategies. Moreover, numerical analysis of the results (i.e., comparison of CPU and GPU outcomes in terms of rounding errors) has proven to be satisfactory. Experiments were carried out on a workstation with two CPUs (Intel Xeon E5440 at 2.83 GHz), one AMD R9 280X GPU and one nVIDIA Tesla K20c GPU. Results have been extremely positive, but further testing should be performed to assess the functionality of the adopted strategies on other complete models and their ability to fruitfully exploit the resources of parallel systems.
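What makes cellular automata attractive for GPU offloading, as in this study, is that each cell's update reads only its own and its neighbors' values, so all cells can update concurrently. Below is a serial sketch of one such step; the neighbor-averaging rule is a generic stand-in, since the abstract does not detail the model's eco-hydrological transition function.

```c
/* One cellular-automaton step over an nx x ny grid (row-major).
   Each cell's new value depends only on the current values of itself
   and its von Neumann neighbors, so an OpenCL kernel could compute
   every cell in parallel.  The neighbor-average rule is a placeholder
   for the model's actual eco-hydrological transition function. */
void ca_step(const double *cur, double *next, int nx, int ny)
{
    for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x) {
            double sum = cur[y * nx + x];
            int cnt = 1;
            if (x > 0)      { sum += cur[y * nx + x - 1]; ++cnt; }
            if (x < nx - 1) { sum += cur[y * nx + x + 1]; ++cnt; }
            if (y > 0)      { sum += cur[(y - 1) * nx + x]; ++cnt; }
            if (y < ny - 1) { sum += cur[(y + 1) * nx + x]; ++cnt; }
            next[y * nx + x] = sum / cnt;   /* neighbor average */
        }
}
```

In an OpenCL port, the double loop body would become the kernel, with `(x, y)` supplied by `get_global_id`, and the fast-local-memory optimization mentioned above would cache neighbor tiles per work-group.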
Stanislawski, Larry V.; Survila, Kornelijus; Wendel, Jeffrey; Liu, Yan; Buttenfield, Barbara P.
2018-01-01
This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.
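The drainage-network extraction described above begins with a flow-direction assignment; in D8-style open-source tools, each cell drains to its steepest downslope neighbor among the eight adjacent cells. Below is a sketch of that single step only (depression handling and the synthetic channels through waterbodies are beyond it); the distance-weighted slope and the -1 pit convention follow common D8 practice and are not necessarily the exact tool used in the study.

```c
/* D8 flow direction for one cell of a row-major nx x ny elevation
   grid: return the linear index of the steepest downslope neighbor
   among the eight adjacent cells, or -1 for a pit/flat cell.  Slopes
   to diagonal neighbors are divided by sqrt(2) since those cells are
   farther away. */
int d8_receiver(const double *elev, int nx, int ny, int x, int y)
{
    static const int dx[8] = {1, 1, 0, -1, -1, -1, 0, 1};
    static const int dy[8] = {0, 1, 1, 1, 0, -1, -1, -1};
    static const double dist[8] = {1.0, 1.4142135623730951, 1.0,
                                   1.4142135623730951, 1.0,
                                   1.4142135623730951, 1.0,
                                   1.4142135623730951};
    double best = 0.0;   /* only strictly downslope neighbors qualify */
    int recv = -1;
    for (int k = 0; k < 8; ++k) {
        int xn = x + dx[k], yn = y + dy[k];
        if (xn < 0 || xn >= nx || yn < 0 || yn >= ny) continue;
        double slope = (elev[y * nx + x] - elev[yn * nx + xn]) / dist[k];
        if (slope > best) { best = slope; recv = yn * nx + xn; }
    }
    return recv;
}
```

Repeating this for every cell yields the flow-direction grid from which flow accumulation, and then stream lines above an accumulation threshold, are derived.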
Performance evaluation of OpenFOAM on many-core architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz
In this article, the application of the Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective of this article is to present the scalability of OpenFOAM on parallel platforms when solving real engineering problems of fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speedups of various implementations of the linear solvers using GPU and MIC accelerators are presented. Numerical experiments on 3D lid-driven cavity flow for several cases with various numbers of cells are presented.
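The PCG solver whose accelerator implementations are benchmarked above has a standard structure built from matrix-vector products and dot products, which is exactly why it offloads well. A minimal Jacobi-preconditioned sketch (not OpenFOAM's actual code) is:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    # Preconditioned conjugate gradient with a Jacobi (diagonal)
    # preconditioner, for symmetric positive-definite A. The dominant
    # costs (A @ p and the dot products) are the kernels that GPU/MIC
    # ports accelerate.
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```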
Spacer grid assembly and locking mechanism
Snyder, Jr., Harold J.; Veca, Anthony R.; Donck, Harry A.
1982-01-01
A spacer grid assembly is disclosed for retaining a plurality of fuel rods in substantially parallel spaced relation, the spacer grids being formed with rhombic openings defining contact means for engaging from one to four fuel rods arranged in each opening, the spacer grids being of symmetric configuration with their rhombic openings being asymmetrically offset to permit inversion and relative rotation of the similar spacer grids for improved support of the fuel rods. An improved locking mechanism includes tie bars having chordal surfaces to facilitate their installation in slotted circular openings of the spacer grids, the tie rods being rotatable into locking engagement with the slotted openings.
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires minimal code intrusion and algorithm alteration for upgrading a legacy solver with GPU computing capability at very little extra programming effort, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention caused by the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach for porting unstructured high-order CFD solvers to GPU computing.
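The face coloring idea can be illustrated with a small greedy coloring over owner/neighbour cell pairs. This is a hypothetical sketch of the general technique (two faces sharing a cell receive different colors, so all faces of one color can be threaded without write conflicts), not the paper's specific algorithm:

```python
def color_faces(faces):
    # faces: list of (owner_cell, neighbour_cell) pairs.
    # Greedily assign each face the smallest color not already used by a
    # face touching the same cell; within one color class, no two faces
    # update the same cell, so their contributions can be accumulated
    # concurrently without atomics.
    cell_colors = {}      # cell -> set of colors already incident on it
    colors = []
    for owner, neigh in faces:
        used = cell_colors.get(owner, set()) | cell_colors.get(neigh, set())
        c = 0
        while c in used:
            c += 1
        colors.append(c)
        cell_colors.setdefault(owner, set()).add(c)
        cell_colors.setdefault(neigh, set()).add(c)
    return colors
```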
Concurrent computation of attribute filters on shared memory parallel machines.
Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold
2008-10-01
Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-Trees. The image or volume is first partitioned into multiple slices. We then compute the Max-Tree of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-Trees of the slices can be merged to obtain the Max-Tree of the whole image. A C implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
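The effect of an attribute (area) opening can be emulated naively by threshold decomposition. The 1D sketch below keeps a point at a grey level only if it lies in a sufficiently long run at that level; real Max-Tree algorithms obtain the same result in near-linear time, and the paper parallelizes them by slicing and merging. This is an illustration of the filter's semantics, not the paper's algorithm:

```python
def area_opening_1d(signal, area):
    # Naive threshold-decomposition area opening: for every grey level t,
    # a point is kept at level >= t only if it belongs to a run of
    # {x >= t} of length >= area. Peaks narrower than `area` are
    # flattened; wide structures are preserved.
    out = [0] * len(signal)
    for t in sorted(set(signal)):
        i, n = 0, len(signal)
        while i < n:
            if signal[i] >= t:
                j = i
                while j < n and signal[j] >= t:
                    j += 1
                if j - i >= area:           # run long enough: keep level t
                    for k in range(i, j):
                        out[k] = max(out[k], t)
                i = j
            else:
                i += 1
    return out
```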
Burri, Christian; Yeramian, Patrick D.; Merolle, Ada; Serge, Kazadi Kyanza; Mpanya, Alain; Lutumba, Pascal; Mesu, Victor Kande Betu Ku; Lubaki, Jean-Pierre Fina; Mpoto, Alfred Mpoo; Thompson, Mark; Munungu, Blaise Fungula; Josenando, Théophilo; Bernhard, Sonja C.; Olson, Carol A.; Blum, Johannes; Tidwell, Richard R.; Pohlig, Gabriele
2016-01-01
Background Sleeping sickness (human African trypanosomiasis [HAT]) is caused by protozoan parasites and characterized by a chronic progressive course, which may last up to several years before death. We conducted two Phase 2 studies to determine the efficacy and safety of oral pafuramidine in African patients with first stage HAT. Methods The Phase 2a study was an open-label, non-controlled, proof-of-concept study where 32 patients were treated with 100 mg of pafuramidine orally twice a day (BID) for 5 days at two trypanosomiasis reference centers (Angola and the Democratic Republic of the Congo [DRC]) between August 2001 and November 2004. The Phase 2b study compared pafuramidine in 41 patients versus standard pentamidine therapy in 40 patients. The Phase 2b study was open-label, parallel-group, controlled, randomized, and conducted at two sites in the DRC between April 2003 and February 2007. The Phase 2b study was then amended to add an open-label sequence (Phase 2b-2), where 30 patients received pafuramidine for 10 days. The primary efficacy endpoint was parasitologic cure at 24 hours (Phase 2a) or 3 months (Phase 2b) after treatment completion. The primary safety outcome was the rate of occurrence of World Health Organization Toxicity Scale Grade 3 or higher adverse events. All subjects provided written informed consent. Findings/Conclusion Pafuramidine for the treatment of first stage HAT was comparable in efficacy to pentamidine after 10 days of dosing. The cure rates 3 months post-treatment were 79% in the 5-day pafuramidine, 100% in the 7-day pentamidine, and 93% in the 10-day pafuramidine groups. In Phase 2b, the percentage of patients with at least 1 treatment-emergent adverse event was notably higher after pentamidine treatment (93%) than pafuramidine treatment for 5 days (25%) and 10 days (57%). These results support continuation of the development program for pafuramidine into Phase 3. PMID:26881924
Burri, Christian; Yeramian, Patrick D; Allen, James L; Merolle, Ada; Serge, Kazadi Kyanza; Mpanya, Alain; Lutumba, Pascal; Mesu, Victor Kande Betu Ku; Bilenge, Constantin Miaka Mia; Lubaki, Jean-Pierre Fina; Mpoto, Alfred Mpoo; Thompson, Mark; Munungu, Blaise Fungula; Manuel, Francisco; Josenando, Théophilo; Bernhard, Sonja C; Olson, Carol A; Blum, Johannes; Tidwell, Richard R; Pohlig, Gabriele
2016-02-01
Sleeping sickness (human African trypanosomiasis [HAT]) is caused by protozoan parasites and characterized by a chronic progressive course, which may last up to several years before death. We conducted two Phase 2 studies to determine the efficacy and safety of oral pafuramidine in African patients with first stage HAT. The Phase 2a study was an open-label, non-controlled, proof-of-concept study where 32 patients were treated with 100 mg of pafuramidine orally twice a day (BID) for 5 days at two trypanosomiasis reference centers (Angola and the Democratic Republic of the Congo [DRC]) between August 2001 and November 2004. The Phase 2b study compared pafuramidine in 41 patients versus standard pentamidine therapy in 40 patients. The Phase 2b study was open-label, parallel-group, controlled, randomized, and conducted at two sites in the DRC between April 2003 and February 2007. The Phase 2b study was then amended to add an open-label sequence (Phase 2b-2), where 30 patients received pafuramidine for 10 days. The primary efficacy endpoint was parasitologic cure at 24 hours (Phase 2a) or 3 months (Phase 2b) after treatment completion. The primary safety outcome was the rate of occurrence of World Health Organization Toxicity Scale Grade 3 or higher adverse events. All subjects provided written informed consent. Pafuramidine for the treatment of first stage HAT was comparable in efficacy to pentamidine after 10 days of dosing. The cure rates 3 months post-treatment were 79% in the 5-day pafuramidine, 100% in the 7-day pentamidine, and 93% in the 10-day pafuramidine groups. In Phase 2b, the percentage of patients with at least 1 treatment-emergent adverse event was notably higher after pentamidine treatment (93%) than pafuramidine treatment for 5 days (25%) and 10 days (57%). These results support continuation of the development program for pafuramidine into Phase 3.
Lorenzo, Armando J; Lynch, Johanne; Matava, Clyde; El-Beheiry, Hossam; Hayes, Jason
2014-07-01
Regional analgesic techniques are commonly used in pediatric urology. Ultrasound guided transversus abdominis plane block has recently gained popularity. However, there is a paucity of information supporting a benefit over regional field infiltration. We present a parallel group, randomized, controlled trial evaluating ultrasound guided transversus abdominis plane block superiority over surgeon delivered regional field infiltration for children undergoing open pyeloplasty at a tertiary referral center. Following ethics board approval and registration, children 0 to 6 years old were recruited and randomized to undergo perioperative transversus abdominis plane block or regional field infiltration for early post-pyeloplasty pain control. General anesthetic delivery, surgical technique and postoperative analgesics were standardized. A blinded assessor regularly captured pain scores in the recovery room using the FLACC (Face, Legs, Activity, Cry, Consolability) scale. The primary outcome was the need for rescue morphine administration based on a FLACC score of 3 or higher. Two pediatric urologists performed 57 pyeloplasties during a 2.5-year period, enrolling 32 children (16 in each group, balanced for age and weight). There were statistically significant differences in the number of children requiring rescue morphine administration (13 of 16 receiving transversus abdominis plane block and 6 of 16 receiving regional field infiltration, p = 0.011), mean ± SD total morphine consumption (0.066 ± 0.051 vs 0.028 ± 0.040 mg/kg, p = 0.021) and mean ± SD pain scores (5 ± 5 vs 2 ± 3, p = 0.043) in the recovery room, in favor of surgeon administered regional field infiltration. No local anesthetic specific adverse events were noted. Ultrasound guided transversus abdominis plane block is not superior to regional field infiltration with bupivacaine as a strategy to minimize early opioid requirements following open pyeloplasty in children. 
Instead, our data suggest that surgeon delivered regional field infiltration provides better pain control. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Open quantum random walks: Bistability on pure states and ballistically induced diffusion
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2013-12-01
Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.
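An OQRW step applies jump (Kraus) operators B_plus and B_minus to the internal state, satisfying B_plus†B_plus + B_minus†B_minus = I so that total probability is conserved. A minimal simulation with a Hadamard-like pair (an illustrative choice of operators, not the specific moduli studied in the paper) is:

```python
import numpy as np

# Hadamard-like jump operators; B+†B+ + B-†B- = I (completeness).
B_plus = np.array([[1.0, 1.0], [0.0, 0.0]]) / np.sqrt(2)   # move right
B_minus = np.array([[0.0, 0.0], [1.0, -1.0]]) / np.sqrt(2)  # move left

def oqrw_step(state):
    # state: dict mapping lattice position -> 2x2 (unnormalized) internal
    # density matrix. One OQRW step sends each site's contribution left
    # and right through the corresponding Kraus map B rho B†.
    new = {}
    for x, rho in state.items():
        for dx, B in ((1, B_plus), (-1, B_minus)):
            contrib = B @ rho @ B.conj().T
            new[x + dx] = new.get(x + dx, np.zeros((2, 2), complex)) + contrib
    return new
```

The probability of finding the walker at site x is the trace of the density matrix stored there, and the traces over all sites always sum to one.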
2011-01-01
either the CTA group (n = 12) or the control group (n = 14). The CTA group learned the open cricothyrotomy procedure using the CTA curriculum. The...completed a 6-item pretest that posed open-ended questions regarding actions and decisions required to conduct the procedure given a specific... posttest assessing their knowledge of the procedure. Parallel forms of the pretest and posttest instruments were developed using different case scenarios

Parallel Optical Random Access Memory (PORAM)
NASA Technical Reports Server (NTRS)
Alphonse, G. A.
1989-01-01
It is shown that the need to minimize component count, power and size, and to maximize packing density, requires a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in order of increasing research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and a magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The next design uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to reduce size further and minimize the number of components.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed-memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is used to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
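The polynomial-expansion idea reduces a matrix function to repeated matrix products, the only primitive that then needs to be distributed. A dense Horner-scheme sketch (NTPoly itself operates on distributed sparse matrices; this is only the serial core idea) is:

```python
import numpy as np

def matrix_function_poly(A, coeffs):
    # Evaluate p(A) = sum_k coeffs[k] * A^k by Horner's scheme, using
    # only matrix-matrix products and scaled identity additions. With
    # sparse A and sparse intermediates, each product costs time roughly
    # proportional to the number of nonzeros, giving the linear scaling
    # mentioned in the abstract.
    n = A.shape[0]
    result = coeffs[-1] * np.eye(n)
    for c in reversed(coeffs[:-1]):
        result = A @ result + c * np.eye(n)
    return result
```

For example, a truncated Taylor series for exp(A) is obtained by passing coeffs = [1, 1, 1/2, 1/6, ...].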
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
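The parallelism exploited here comes from the independence of the fitness evaluations inside SCE-UA: each candidate parameter set can be scored by a separate worker. A minimal Python sketch using a thread pool is below; the paper uses OpenMP and CUDA, and the Rosenbrock objective is a hypothetical stand-in for running the Xinanjiang model against observations:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(params):
    # Hypothetical calibration objective (Rosenbrock function), standing
    # in for the rainfall-runoff model error a real calibration computes.
    x, y = params
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def evaluate_population(population, workers=4):
    # The costly model runs are independent, so SCE-UA's per-candidate
    # evaluations can be dispatched concurrently and gathered in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, population))
```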
Moore, Simon C; Alam, M Fasihul; Heikkinen, Marjukka; Hood, Kerenza; Huang, Chao; Moore, Laurence; Murphy, Simon; Playle, Rebecca; Shepherd, Jonathan; Shovelton, Claire; Sivarajasingam, Vaseekaran; Williams, Anne
2017-11-01
Premises licensed for the sale and consumption of alcohol can contribute to levels of assault-related injury through poor operational practices that, if addressed, could reduce violence. We tested the real-world effectiveness of an intervention designed to change premises operation, whether any intervention effect changed over time, and the effect of intervention dose. A parallel randomized controlled trial with the unit of allocation and outcomes measured at the level of individual premises. All premises (public houses, nightclubs or hotels with a public bar) in Wales, UK. A randomly selected subsample (n = 600) of eligible premises (that had one or more violent incidents recorded in police-recorded crime data; n = 837) were randomized into control and intervention groups. Intervention premises were audited by Environmental Health Practitioners who identified risks for violence and provided feedback by varying dose (informal, through written advice, follow-up visits) on how risks could be addressed. Control premises received usual practice. Police data were used to derive a binary variable describing whether, on each day premises were open, one or more violent incidents were evident over a 455-day period following randomization. Due to premises being unavailable at the time of intervention delivery 208 received the intervention and 245 were subject to usual practice in an intention-to-treat analysis. The intervention was associated with an increase in police recorded violence compared to normal practice (hazard ratio = 1.34, 95% confidence interval = 1.20-1.51). Exploratory analyses suggested that reduced violence was associated with greater intervention dose (follow-up visits). An Environmental Health Practitioner-led intervention in premises licensed for the sale and on-site consumption of alcohol resulted in an increase in police recorded violence. © 2017 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Alam, M. Fasihul; Heikkinen, Marjukka; Hood, Kerenza; Huang, Chao; Moore, Laurence; Murphy, Simon; Playle, Rebecca; Shepherd, Jonathan; Shovelton, Claire; Sivarajasingam, Vaseekaran; Williams, Anne
2017-01-01
Abstract Background and Aims Premises licensed for the sale and consumption of alcohol can contribute to levels of assault‐related injury through poor operational practices that, if addressed, could reduce violence. We tested the real‐world effectiveness of an intervention designed to change premises operation, whether any intervention effect changed over time, and the effect of intervention dose. Design A parallel randomized controlled trial with the unit of allocation and outcomes measured at the level of individual premises. Setting All premises (public houses, nightclubs or hotels with a public bar) in Wales, UK. Participants A randomly selected subsample (n = 600) of eligible premises (that had one or more violent incidents recorded in police‐recorded crime data; n = 837) were randomized into control and intervention groups. Intervention and comparator Intervention premises were audited by Environmental Health Practitioners who identified risks for violence and provided feedback by varying dose (informal, through written advice, follow‐up visits) on how risks could be addressed. Control premises received usual practice. Measurements Police data were used to derive a binary variable describing whether, on each day premises were open, one or more violent incidents were evident over a 455‐day period following randomization. Findings Due to premises being unavailable at the time of intervention delivery 208 received the intervention and 245 were subject to usual practice in an intention‐to‐treat analysis. The intervention was associated with an increase in police recorded violence compared to normal practice (hazard ratio = 1.34, 95% confidence interval = 1.20–1.51). Exploratory analyses suggested that reduced violence was associated with greater intervention dose (follow‐up visits). 
Conclusion An Environmental Health Practitioner‐led intervention in premises licensed for the sale and on‐site consumption of alcohol resulted in an increase in police recorded violence. PMID:28543914
Su, Qing; Liu, Chao; Zheng, Hongting; Zhu, Jun; Li, Peng Fei; Qian, Lei; Yang, Wen Ying
2017-06-01
Premixed insulins are recommended starter insulins in Chinese patients after oral antihyperglycemic medication (OAM) failure. In the present study, we compared the efficacy and safety of insulin lispro mix 25 (LM25) twice daily (b.i.d.) and insulin lispro mix 50 (LM50) b.i.d. as a starter insulin regimen in Chinese patients with type 2 diabetes mellitus (T2DM) who had inadequate glycemic control with OAMs. The primary efficacy outcome in the present open-label parallel randomized clinical trial was change in HbA1c from baseline to 26 weeks. Patients were randomized in a 1:1 ratio to LM25 (n = 80) or LM50 (n = 76). A mixed-effects model with repeated measures was used to analyze continuous variables. The Cochran-Mantel-Haenszel test with stratification factor was used to analyze categorical variables. At the end of the study, LM50 was more efficacious than LM25 in reducing mean HbA1c levels (least-squares [LS] mean difference 0.48; 95% confidence interval [CI] 0.22, 0.74; P < 0.001). More subjects in the LM50 than LM25 group achieved HbA1c targets of <7.0% (72.4% vs 45.0%; P = 0.001) or ≤6.5% (52.6% vs 20.0%; P < 0.001). Furthermore, LM50 was more effective than LM25 at reducing HbA1c in patients with baseline HbA1c, blood glucose excursion, and postprandial glucose greater than or equal to median levels (P ≤ 0.001). The rate and incidence of hypoglycemic episodes and increase in weight at the end of the study were similar between treatment groups. In Chinese patients with T2DM, LM50 was more efficacious than LM25 as a starter insulin. © 2016 The Authors. Journal of Diabetes published by John Wiley & Sons Australia, Ltd and Ruijin Hospital, Shanghai Jiaotong University School of Medicine.
Ovulatory effects of three oral contraceptive regimens: a randomized, open-label, descriptive trial.
Seidman, Larry; Kroll, Robin; Howard, Brandon; Ricciotti, Nancy; Hsieh, Jennifer; Weiss, Herman
2015-06-01
This study describes ovarian activity suppression of a 21/7-active low-dose combined oral contraceptive (COC) regimen that included only ethinyl estradiol (EE) during the traditional hormone-free interval (HFI) and two commercially available 28-day regimens, a 24/4 and a 21/7 regimen. The randomized, open-label, parallel-group descriptive study was conducted at two US sites. Healthy, reproductive-aged women (n=146) were randomized to one of three groups for three consecutive 28-day cycles, as follows: treatment 1 (n=39 completed): 21/7-active COC [21 days of 150 mcg desogestrel (DSG)/20 mcg EE, followed by 7 days of 10 mcg EE (DSG/EE+7 days EE)], treatment 2 (n=39 completed): 24 days of 3 mg drospirenone (DRSP)/20 mcg EE, followed by 4 placebo (PBO)-pill days (DRSP/EE+4 days PBO) and treatment 3 (n=42 completed): 21 days of 100 mcg levonorgestrel (LNG)/20 mcg EE, followed by 7 PBO-pill days (LNG/EE+7 days PBO). The primary outcome was ovarian activity suppression assessed by transvaginal ultrasound and serum hormone concentrations and classified using the Hoogland and Skouby (H/S) method. Ovarian activity rate (H/S grade 4 or 5) was low for all three treatments: 0% [95% confidence interval (CI) 0-2.8] for DSG/EE+7 days EE, 1% (95% CI 0.2-5.2) for DRSP/EE+4 days PBO and 1% (95% CI 0-3.9) for LNG/EE+7 days PBO. All three treatments showed similar suppression of serum progesterone, 17β-estradiol, follicle-stimulating hormone and luteinizing hormone levels. The 21/7-active low-dose COC regimen (DSG/EE+7 days EE) showed ovarian activity suppression that was similar to the 24/4 (DRSP/EE+4 days PBO) and 21/7 (LNG/EE+7 days PBO) regimens. The 21/7-active low-dose COC regimen (DSG/EE+7 days EE) that included only EE during the traditional HFI showed suppression of ovarian follicular activity that was similar to the 24/4 (DRSP/EE+4 days PBO) and the 21/7 (LNG/EE+7 days PBO) comparator regimens. Copyright © 2015 Elsevier Inc. All rights reserved.
Revadigar, Neelambika; Manobianco, Brittany E.
2018-01-01
Background: Benzodiazepines (BZDs) are among the most prescribed sedative hypnotics and among the most misused and abused medications by patients, in parallel with opioids. It is estimated that more than 100 million Benzodiazepine (BZD) prescriptions were written in the United States in 2009. While medically useful, BZDs are potentially dangerous. The co-occurring abuse of opioids and BZD, as well as increases in BZD abuse, tolerance, dependence, and short- and long-term side effects, have prompted a worldwide discussion about the challenging aspects of medically managing the discontinuation of BZDs. Abrupt cessation can cause death. This paper addresses the challenges of medications suggested for the management of BZD discontinuation, their efficacy, the risks of abuse and associated medical complications. The focus of this review is on the challenges of several medications suggested for the management of BZD discontinuation, their efficacy, the risks of abuse, and associated medical complications. Methods: An electronic search was performed of Medline, Worldwide Science, Directory of Open Access Journals, Embase, Cochrane Library, Google Scholar, PubMed Central, and PubMed from 1990 to 2017. The review includes double-blind, placebo-controlled studies for the most part, open-label pilot studies, and animal studies, in addition to observational research. We expand the search to review articles, naturalistic studies, and to a lesser extent, letters to the editor/case reports. We exclude abstract and poster presentations, books, and book chapters. Results: The efficacy of these medications is not robust. While some of these medicines are relatively safe to use, some of them have a narrow therapeutic index, with severe, life-threatening side effects. Randomized studies have been limited. There is a paucity of comparative research. The review has several limitations. 
The quality of the documents varies according to whether they are randomized studies, nonrandomized studies, naturalistic studies, pilot studies, letters to the editors, or case reports. Conclusions: The use of medications for the discontinuation of BZDs seems appropriate. It is a challenge that requires further investigation through randomized clinical trials to maximize efficacy and to minimize additional risks and side effects. PMID:29713452
Fluyau, Dimy; Revadigar, Neelambika; Manobianco, Brittany E
2018-05-01
Benzodiazepines (BZDs) are among the most prescribed sedative hypnotics and among the most misused and abused medications by patients, in parallel with opioids. It is estimated that more than 100 million Benzodiazepine (BZD) prescriptions were written in the United States in 2009. While medically useful, BZDs are potentially dangerous. The co-occurring abuse of opioids and BZD, as well as increases in BZD abuse, tolerance, dependence, and short- and long-term side effects, have prompted a worldwide discussion about the challenging aspects of medically managing the discontinuation of BZDs. Abrupt cessation can cause death. This paper addresses the challenges of medications suggested for the management of BZD discontinuation, their efficacy, the risks of abuse and associated medical complications. The focus of this review is on the challenges of several medications suggested for the management of BZD discontinuation, their efficacy, the risks of abuse, and associated medical complications. An electronic search was performed of Medline, Worldwide Science, Directory of Open Access Journals, Embase, Cochrane Library, Google Scholar, PubMed Central, and PubMed from 1990 to 2017. The review includes double-blind, placebo-controlled studies for the most part, open-label pilot studies, and animal studies, in addition to observational research. We expand the search to review articles, naturalistic studies, and to a lesser extent, letters to the editor/case reports. We exclude abstract and poster presentations, books, and book chapters. The efficacy of these medications is not robust. While some of these medicines are relatively safe to use, some of them have a narrow therapeutic index, with severe, life-threatening side effects. Randomized studies have been limited. There is a paucity of comparative research. The review has several limitations. 
The quality of the documents varies according to whether they are randomized studies, nonrandomized studies, naturalistic studies, pilot studies, letters to the editors, or case reports. The use of medications for the discontinuation of BZDs seems appropriate. It is a challenge that requires further investigation through randomized clinical trials to maximize efficacy and to minimize additional risks and side effects.
Toward Enhancing OpenMP's Work-Sharing Directives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, B M; Huang, L; Jin, H
2006-05-17
OpenMP provides a portable programming interface for shared memory parallel computers (SMPs). Although this interface has proven successful for small SMPs, it requires greater flexibility in light of the steadily growing size of individual SMPs and the recent advent of multithreaded chips. In this paper, we describe two application development experiences that exposed these expressivity problems in the current OpenMP specification. We then propose mechanisms to overcome these limitations, including thread subteams and thread topologies. Thus, we identify language features that improve OpenMP application performance on emerging and large-scale platforms while preserving ease of programming.
ERIC Educational Resources Information Center
Guillemont, Juliette; Cogordan, Chloé; Nalpas, Bertrand; Nguyen-Thanh, Việt; Richard, Jean-Baptiste; Arwidson, Pierre
2017-01-01
This study aims to evaluate the effectiveness of a web-based intervention to reduce alcohol consumption among hazardous drinkers. A two-group parallel randomized controlled trial was conducted among adults identified as hazardous drinkers according to the Alcohol Use Disorders Identification Test. The intervention delivers personalized normative…
Random Walk Method for Potential Problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.
2002-01-01
A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speedup of 16 is achieved as the number of processors is increased from 1 to 23.
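The RWM estimates the potential at an interior point as the expected boundary value at the exit point of a random walk, which is why independent walks distribute trivially across processors. A grid-based sketch for Laplace's equation (an illustration of the general technique, not the paper's specific local scheme) is:

```python
import random

def laplace_rwm(boundary, start, n_walks=2000, seed=0):
    # boundary: dict mapping boundary grid points (i, j) -> prescribed
    # potential. From `start`, launch independent random walks that step
    # to a uniformly random neighbour until they hit the boundary; the
    # potential estimate is the mean boundary value at the exit points.
    # Each walk is independent, so walks can be farmed out to processors.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = start
        while (x, y) not in boundary:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary[(x, y)]
    return total / n_walks
```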
Supervised Home Training of Dialogue Skills in Chronic Aphasia: A Randomized Parallel Group Study
ERIC Educational Resources Information Center
Nobis-Bosch, Ruth; Springer, Luise; Radermacher, Irmgard; Huber, Walter
2011-01-01
Purpose: The aim of this study was to prove the efficacy of supervised self-training for individuals with aphasia. Linguistic and communicative performance in structured dialogues represented the main study parameters. Method: In a cross-over design for randomized matched pairs, 18 individuals with chronic aphasia were examined during 12 weeks of…
ERIC Educational Resources Information Center
O'Callaghan, Paul; McMullen, John; Shannon, Ciaran; Rafferty, Harry; Black, Alastair
2013-01-01
Objective: To assess the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) delivered by nonclinical facilitators in reducing posttraumatic stress, depression, and anxiety and conduct problems and increasing prosocial behavior in a group of war-affected, sexually exploited girls in a single-blind, parallel-design, randomized,…
CCMC Modeling of Magnetic Reconnection in Electron Diffusion Region Events
NASA Astrophysics Data System (ADS)
Marshall, A.; Reiff, P. H.; Daou, A.; Webster, J.; Sazykin, S. Y.; Kuznetsova, M.; Glocer, A.; Rastaetter, L.; Welling, D. T.; DeZeeuw, D.; Russell, C. T.
2017-12-01
We use the unprecedented spatial and temporal cadence of the Magnetospheric Multiscale Mission to study four electron diffusion events, and infer important physical properties of their respective magnetic reconnection processes. We couple these observations with numerical simulations using tools such as SWMF with RCM, and RECON-X, from the Community Coordinated Modeling Center, to provide, for the first time, a coherent temporal description of the magnetic reconnection process through tracing the coupling of IMF and closed Earth magnetic field lines, leading to the corresponding polar cap open field lines. We note that the reconnection geometry is far from slab-like: the IMF field lines drape over the magnetopause, leading to a stretching of the field lines. The stretched field lines become parallel to, and merge with, the dayside separator. Surprisingly, the inner closed field lines also distort to become parallel to the separator. This parallel geometry allows a very sharp boundary between open and closed field lines. In three of the events, the MMS location was near the predicted separator location; in the fourth it was near the outflow region.
Podoleanu, Adrian Gh; Bradu, Adrian
2013-08-12
Conventional spectral domain interferometry (SDI) methods suffer from the need of data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master-Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, a signal for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need of linearization leads to lower cost OCT systems and opens potential avenues in increasing the speed of production of en-face OCT images in comparison with conventional SDI.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on generating Poisson disks on surfaces, owing to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
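The priority idea can be sketched in a few lines. The version below is a deliberate simplification: it works in the plane with Euclidean distance rather than geodesic distance on a surface, and it processes candidates sequentially in descending priority order, which produces exactly the configuration that the paper's concurrent threads converge to, since each candidate's fate depends only on conflicting candidates of higher priority:

```python
import random

def poisson_disk_by_priority(candidates, r):
    """Accept a maximal subset of 2D `candidates` such that no two accepted
    points lie closer than `r`. Conflicts are resolved by a random, unique
    priority: a candidate is accepted only if no already-accepted (hence
    higher-priority) point conflicts with it. Because each decision looks
    only at higher-priority neighbours, threads could evaluate candidates
    concurrently; the sequential sweep here gives the same result."""
    pri = [random.random() for _ in candidates]
    order = sorted(range(len(candidates)), key=lambda i: -pri[i])
    accepted = []
    for i in order:  # highest priority first
        x, y = candidates[i]
        if all((x - ax) ** 2 + (y - ay) ** 2 >= r * r for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(500)]
disks = poisson_disk_by_priority(pts, 0.1)
```

Greedy acceptance in random-priority order is a maximal independent set construction, which is why the result carries no ordering bias.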
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer
This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are: trends in high-performance computing; scientific visualization (OpenGL, ray tracing and volume rendering, VTK, and ParaView); data science at scale (in-situ visualization, image databases, distributed-memory parallelism, shared-memory parallelism, VTK-m, "big data"); and an analysis example.
Johnstone, Jeanette M; Roake, Chelsea; Sheikh, Ifrah; Mole, Ashlie; Nigg, Joel T; Oken, Barry
2016-12-15
Adolescents are in a high-risk period developmentally, in terms of susceptibility to stress. A mindfulness intervention represents a potentially useful strategy for developing cognitive and emotion regulation skills associated with successful stress coping. Mindfulness strategies have been used successfully for emotional coping in adults, but are not as well studied in youth. This article details a novel proposal for the design of an 8-week randomized study to evaluate a high school-based mindfulness curriculum delivered as part of a two-semester health class. A wellness education intervention is proposed as an active control, along with a waitlist control condition. All students enrolled in a sophomore (10th grade) health class at a private suburban high school will be invited to participate (n = 300). Pre-test assessments will be obtained by youth report, parent ratings, and on-site behavioral testing. The assessments will evaluate baseline stress, mood, emotional coping, controlled attention, and working memory. Participants, divided into 13 classrooms, will be randomized into one of three conditions, by classroom: a mindfulness intervention, an active control (wellness education), and a passive control (waitlist). Waitlisted participants will receive one of the interventions in the following term. Intervention groups will meet weekly for 8 weeks during regularly scheduled health classes. Immediate post-tests will be conducted, followed by a 60-day post-test. It is hypothesized that the mindfulness intervention will outperform the other conditions with regard to the adolescents' mood, attention and response to stress.
Saito, Geisi; Zapata, Rodrigo; Rivera, Rodrigo; Zambrano, Héctor; Rojas, David; Acevedo, Hernán; Ravera, Franco; Mosquera, John; Vásquez, Juan E; Mura, Jorge
2017-01-01
Functional recovery after aneurysmal subarachnoid hemorrhage (SAH) remains a significant problem. We tested a novel therapeutic approach with long-chain omega-3 polyunsaturated fatty acids (n-3 PUFAs) to assess the safety and feasibility of an effectiveness trial. We conducted a multicentre, parallel, randomized, open-label pilot trial. Patients admitted within 72 hours after SAH with modified Fisher scale scores of 3 or 4 who were selected for scheduled aneurysm clipping were allocated to receive either n-3 PUFA treatment (parenteral perioperative: 5 days; oral: 8 weeks) plus usual care or usual care alone. Exploratory outcome measures included major postoperative intracranial bleeding complications (PIBCs), cerebral infarction caused by delayed cerebral ischemia, shunt-dependent hydrocephalus, and consent rate. The computed tomography evaluator was blinded to the group assignment. Forty-one patients were randomized, but one patient had to be excluded after allocation. Twenty patients remained for intention-to-treat analysis in each trial arm. No PIBCs (95% confidence interval [CI]: 0.00 to 0.16) or other unexpected harms were observed in the intervention group (IG). No patient suspended the intervention due to side effects. There was a trend towards improvement in all benefit-related outcomes in the IG. The overall consent rate was 0.91 (95% CI: 0.78 to 0.96), and there was no consent withdrawal. Although the balance between the benefit and harm of the intervention appears highly favourable, further testing in SAH patients is required. We recommend proceeding with amendments in a dose-finding trial to determine the optimal duration of parenteral treatment.
Effects of professional oral health care on elderly: randomized trial.
Morino, T; Ookawa, K; Haruta, N; Hagiwara, Y; Seki, M
2014-11-01
To better understand the role of professional oral health care for the elderly in improving geriatric oral health, the effects of short-term professional oral health care (once per week for 1 month) on oral microbiological parameters were assessed. A parallel, open-label, randomized controlled trial was undertaken in a nursing home for the elderly in Shizuoka, Japan. Thirty-four dentate elderly residents over 74 years were randomly assigned by ID number to the intervention (17/34) and control (17/34) groups. The outcomes were changes in oral microbiological parameters within the intervention period: the number of bacteria in unstimulated saliva (whole bacteria, Streptococcus, Fusobacterium and Prevotella), detection of opportunistic pathogens, and an index of oral hygiene (Dental Plaque Index, DPI). Each parameter was evaluated before and after the intervention period. Four participants were lost owing to death (1), bone fracture (1), refusal to participate (1) and multi-antibiotic usage (1). Finally, 30 residents were analysed (14 intervention and 16 control). At baseline, no difference was found between the control and intervention groups. After the intervention period, the percentage of Streptococcus species increased significantly in the intervention group (intervention, 86% [12/14]; control, 50% [8/16]; Fisher's, right-tailed, P < 0.05). Moreover, DPI improved significantly in the intervention group (intervention, 57% [8/14]; control, 13% [2/16]; Fisher's, two-tailed, P < 0.05). The improvement in DPI extended for 3 months after the intervention. No side effects were reported. Short-term professional oral health care can improve oral conditions in the elderly. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Wheel of Wellness Counseling in Community Dwelling, Korean Elders: A Randomized, Controlled Trial.
Kwon, So Hi
2015-06-01
The purpose of this study was to investigate the effects of Wheel of Wellness counseling on wellness lifestyle, depression, and health-related quality of life in community-dwelling elderly people. A parallel, randomized controlled, open-label trial was conducted. Ninety-three elderly people in a senior welfare center were randomly assigned to two groups: 1) a Wheel of Wellness counseling intervention group (n=49) and 2) a no-treatment control group (n=44). Wheel of Wellness counseling consisted of structured, individual counseling based on the Wheel of Wellness model, provided once a week for four weeks. Wellness lifestyle, depression, and health-related quality of life were assessed pre- and post-test in both groups. Data from 89 participants were analyzed. For participants in the experimental group, there was a significant improvement on all of the wellness-lifestyle subtasks except realistic beliefs. The experimental group (n=43) improved significantly compared to the control group (n=46) from pre- to post-test in the areas of sense of control (p=.033), nutrition (p=.017), exercise (p=.039), self-care (p<.001), stress management (p=.017), work (p=.011), perceived wellness (p=.019), and depression (p=.031). One participant in the intervention group discontinued the intervention due to hospitalization and three in the control group discontinued the sessions. Wheel of Wellness counseling was beneficial in enhancing wellness for community-dwelling elderly people. Research into long-term effects of the intervention on health outcomes is recommended.
Aman, Michael G; Hollway, Jill A; Veenstra-VanderWeele, Jeremy; Handen, Benjamin L; Sanders, Kevin B; Chan, James; Macklin, Eric; Arnold, L Eugene; Wong, Taylor; Newsom, Cassandra; Hastie Adams, Rianne; Marler, Sarah; Peleg, Naomi; Anagnostou, Evdokia A
2018-05-01
Studies in humans and rodents suggest that metformin, a medicine typically used to treat type 2 diabetes, may have beneficial effects on memory. We sought to determine whether metformin improved spatial or verbal memory in children with autism spectrum disorder (ASD) and overweight associated with atypical antipsychotic use. We studied the effects of metformin (Riomet®) concentrate on spatial and verbal memory in 51 youth with ASD, ages 6 through 17 years, who were taking atypical antipsychotic medications, had gained significant weight, and were enrolled in a trial of metformin for weight management. Phase 1 was a 16-week, randomized, double-blind, placebo-controlled, parallel-group comparison of metformin (500-850 mg given twice a day) versus placebo. During Phase 2, all participants took open-label metformin from week 17 through week 32. We assessed spatial and verbal memory using the Neuropsychological Assessment, 2nd Edition (NEPSY-II) and a modified children's verbal learning task. No measures differed between participants randomized to metformin versus placebo, at either 16 or 32 weeks, after adjustment for multiple comparisons. Sixteen-week change in memory for spatial location on the NEPSY-II was nominally better among participants randomized to placebo. However, patterns of treatment response across all measures revealed no systematic differences in performance, suggesting that metformin had no effect on spatial or verbal memory in these children. Although further study is needed to support these null effects, the overall impression is that metformin does not affect memory in overweight youth with ASD who were taking atypical antipsychotic medications.
Kato, Sawako; Ando, Masahiko; Kondo, Takaaki; Yoshida, Yasuko; Honda, Hiroyuki; Maruyama, Shoichi
2018-05-01
Modification of lifestyle habits, including diet and physical activity, is essential for the prevention and control of type 2 diabetes mellitus (T2DM) in elderly patients. However, individualized treatment is more critical for the elderly than for general patients. This study aims to determine lifestyle interventions that lower hemoglobin A1c (HbA1c) in Japanese pre- and early diabetic elderly subjects. The BEST-LIFE trial is an ongoing, open-label, 6-month, randomized (1:1) parallel-group trial. Subjects with HbA1c of ≥5.6%, randomly assigned to the intervention or control group, use wearable monitoring devices loaded with Internet of Things (IoT) systems that aid them with self-management, and obtain monthly remote health guidance from a public health nurse. The primary outcome is the change in HbA1c after the 6-month intervention relative to baseline values. The secondary outcome is the change in behavior modification stages. The background, rationale, and study design of the trial are also presented. One hundred forty-five subjects have already been enrolled in this lifestyle intervention program, which will end in 2019. The BEST-LIFE trial will provide new evidence regarding the effectiveness and safety of our program in lowering HbA1c in elderly subjects with T2DM. It will also investigate whether information communication technology tools and monitoring devices loaded with IoT can support health care in elderly subjects. The trial registration number is UMIN-CTR: UMIN 000023356.
Niikura, Ryota; Nagata, Naoyoshi; Yamada, Atsuo; Doyama, Hisashi; Shiratori, Yasutoshi; Nishida, Tsutomu; Kiyotoki, Shu; Yada, Tomoyuki; Fujita, Tomoki; Sumiyoshi, Tetsuya; Hasatani, Kenkei; Mikami, Tatsuya; Honda, Tetsuro; Mabe, Katsuhiro; Hara, Kazuo; Yamamoto, Katsumi; Takeda, Mariko; Takata, Munenori; Tanaka, Mototsugu; Shinozaki, Tomohiro; Fujishiro, Mitsuhiro; Koike, Kazuhiko
2018-04-03
The clinical benefit of early colonoscopy within 24 h of arrival in patients with severe acute lower gastrointestinal bleeding (ALGIB) remains controversial. This trial will compare early colonoscopy (performed within 24 h) versus elective colonoscopy (performed between 24 and 96 h) to examine the identification rate of stigmata of recent hemorrhage (SRH) in ALGIB patients. We hypothesize that, compared with elective colonoscopy, early colonoscopy increases the identification of SRH and subsequently improves clinical outcomes. This trial is an investigator-initiated, multicenter, randomized, open-label, parallel-group trial examining the superiority of early colonoscopy over elective colonoscopy (standard therapy) in ALGIB patients. The primary outcome measure is the identification of SRH. Secondary outcomes include 30-day rebleeding, success of endoscopic treatment, need for additional endoscopic examination, need for interventional radiology, need for surgery, need for transfusion during hospitalization, length of stay, 30-day thrombotic events, 30-day mortality, preparation-related adverse events, and colonoscopy-related adverse events. The sample size will enable detection of a 9% SRH rate in elective colonoscopy patients and an SRH rate of ≥26% in early colonoscopy patients with a risk of type I error of 5% and a power of 80%. This trial will provide high-quality data on the benefits and risks of early colonoscopy in ALGIB patients. UMIN-CTR Identifier, UMIN000021129. Registered on 21 February 2016; ClinicalTrials.gov Identifier, NCT03098173. Registered on 24 March 2017.
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications, and image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches to image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, but it is also a very data-intensive task. To accelerate watershed algorithms and achieve real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for shared-memory multiprocessing) with Pthreads (POSIX threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
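The shared-memory strategies surveyed share a common shape: split the image into tiles, process tiles concurrently, then merge. The sketch below mimics that decomposition pattern generically in Python with a thread pool, using a trivial thresholding stage as a stand-in for the per-tile watershed work; it illustrates the parallelization strategy, not any specific algorithm from the survey:

```python
from concurrent.futures import ThreadPoolExecutor

def segment_tile(tile, threshold):
    """Toy per-tile stage: label each pixel foreground (1) or background (0).
    A real parallel watershed would flood-fill per tile and then merge
    labels across tile borders; thresholding stands in for that here."""
    return [[1 if p > threshold else 0 for p in row] for row in tile]

def segment_parallel(image, threshold, tiles=4):
    """Split the image into horizontal strips and process them concurrently,
    mimicking the domain-decomposition pattern that OpenMP's parallel-for
    or a Pthreads worker pool implements on shared-memory multicores."""
    h = len(image)
    step = (h + tiles - 1) // tiles
    strips = [image[i:i + step] for i in range(0, h, step)]
    with ThreadPoolExecutor(max_workers=tiles) as pool:
        parts = list(pool.map(lambda s: segment_tile(s, threshold), strips))
    return [row for part in parts for row in part]

image = [[(x * y) % 7 for x in range(8)] for y in range(8)]
labels = segment_parallel(image, 3)
```

Because the toy stage is pixel-local, the strip-parallel result is bit-identical to the sequential one; the hard part of a real parallel watershed, as the survey discusses, is exactly the cross-tile dependencies this sketch avoids.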
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language (OpenCL) framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable, vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
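The reason such photon-transport codes parallelize so well is that each photon history is an independent sequence of exponentially distributed free flights, so histories map naturally onto GPU/OpenCL work-items. A deliberately minimal absorbing-slab sketch (a toy model of ours, not the authors' code) whose transmitted fraction can be checked against the Beer-Lambert law:

```python
import math, random

def mc_transmission(mu_t, thickness, nphotons=20000, seed=0):
    """Toy pencil-beam Monte Carlo: photons travel along +z through a purely
    absorbing slab with attenuation coefficient `mu_t`. Each photon draws an
    exponentially distributed free path; it is transmitted if the path
    exceeds the slab thickness. The transmitted fraction should approach
    exp(-mu_t * thickness). Every photon is independent, which is why real
    simulators assign one photon history per GPU/OpenCL work-item."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(nphotons):
        # sample the free path length s ~ Exp(mu_t)
        s = -math.log(1.0 - rng.random()) / mu_t
        if s > thickness:
            transmitted += 1
    return transmitted / nphotons

t = mc_transmission(mu_t=1.0, thickness=1.0)
```

With 20,000 photons the estimate sits within a few tenths of a percent of exp(-1) ≈ 0.368; scattering, weighting, and boundary handling are what the full simulators add on top of this core loop.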
A randomized controlled trial of intranasal ketamine in migraine with prolonged aura.
Afridi, Shazia K; Giffin, Nicola J; Kaube, Holger; Goadsby, Peter J
2013-02-12
The aim of our study was to test the hypothesis that ketamine would affect aura in a randomized controlled double-blind trial, and thus to provide direct evidence for the role of glutamatergic transmission in human aura. We performed a double-blinded, randomized parallel-group controlled study investigating the effect of 25 mg intranasal ketamine on migraine with prolonged aura in 30 migraineurs using 2 mg intranasal midazolam as an active control. Each subject recorded data from 3 episodes of migraine. Eighteen subjects completed the study. Ketamine reduced the severity (p = 0.032) but not duration of aura in this group, whereas midazolam had no effect. These data provide translational evidence for the potential importance of glutamatergic mechanisms in migraine aura and offer a pharmacologic parallel between animal experimental work on cortical spreading depression and the clinical problem. This study provides class III evidence that intranasal ketamine is effective in reducing aura severity in patients with migraine with prolonged aura.
Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].
Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N
2014-09-20
Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
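The random-subset idea transfers directly to any least-squares model. The sketch below fits a single amplitude parameter to a synthetic fringe image from roughly 5% of the pixels and compares it with the full-frame fit; the model and numbers are invented for illustration and far simpler than the Lorenz-Mie scattering models used in hologram fitting:

```python
import math, random

def fit_amplitude(image, pattern, pixels):
    """Least-squares amplitude for the model image = a * pattern, using only
    the pixel indices in `pixels` (closed form: a = <I, p> / <p, p>)."""
    num = sum(image[i] * pattern[i] for i in pixels)
    den = sum(pattern[i] ** 2 for i in pixels)
    return num / den

# synthetic "hologram": concentric fringes plus noise (a stand-in for a
# real scattering model)
rng = random.Random(3)
n = 64
pattern = [math.cos(0.5 * math.hypot(i % n - n / 2, i // n - n / 2))
           for i in range(n * n)]
truth = 2.5
image = [truth * p + 0.05 * rng.gauss(0, 1) for p in pattern]

all_pixels = range(n * n)
subset = rng.sample(range(n * n), 200)   # ~5% of the 4096 pixels
a_full = fit_amplitude(image, pattern, all_pixels)
a_sub = fit_amplitude(image, pattern, subset)
```

The subset fit costs about 20x less arithmetic yet lands within a small multiple of the full-frame error, mirroring the paper's observation that subset fits make good fast estimates or initial guesses for full-frame refinement.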
Langford, R M; Mares, J; Novotna, A; Vachova, M; Novakova, I; Notcutt, W; Ratcliffe, S
2013-04-01
Central neuropathic pain (CNP) occurs in many multiple sclerosis (MS) patients. The provision of adequate pain relief to these patients can be very difficult. Here we report the first phase III placebo-controlled study of the efficacy of the endocannabinoid system modulator delta-9-tetrahydrocannabinol (THC)/cannabidiol (CBD) oromucosal spray (USAN name, nabiximols; Sativex, GW Pharmaceuticals, Salisbury, Wiltshire, UK), to alleviate CNP. Patients who had failed to gain adequate analgesia from existing medication were treated with THC/CBD spray or placebo as an add-on treatment, in a double-blind manner, for 14 weeks to investigate the efficacy of the medication in MS-induced neuropathic pain. This parallel-group phase of the study was then followed by an 18-week randomized-withdrawal study (14-week open-label treatment period plus a double-blind 4-week randomized-withdrawal phase) to investigate time to treatment failure and show maintenance of efficacy. A total of 339 patients were randomized to phase A (167 received THC/CBD spray and 172 received placebo). Of those who completed phase A, 58 entered the randomized-withdrawal phase. The primary endpoint of responder analysis at the 30% level at week 14 of phase A of the study was not met, with 50% of patients on THC/CBD spray classed as responders at the 30% level compared to 45% of patients on placebo (p = 0.234). However, an interim analysis at week 10 showed a statistically significant treatment difference in favor of THC/CBD spray at this time point (p = 0.046). During the randomized-withdrawal phase, the primary endpoint of time to treatment failure was statistically significant in favor of THC/CBD spray, with 57% of patients receiving placebo failing treatment versus 24% of patients from the THC/CBD spray group (p = 0.04).
The mean change from baseline in Pain Numerical Rating Scale (NRS) (p = 0.028) and sleep quality NRS (p = 0.015) scores, both secondary endpoints in phase B, were also statistically significant compared to placebo, with estimated treatment differences of -0.79 and 0.99 points, respectively, in favor of THC/CBD spray treatment. The results of the current investigation were equivocal, with conflicting findings in the two phases of the study. While there were a large proportion of responders to THC/CBD spray treatment during the phase A double-blind period, the primary endpoint was not met due to a similarly large number of placebo responders. In contrast, there was a marked effect in phase B of the study, with an increased time to treatment failure in the THC/CBD spray group compared to placebo. These findings suggest that further studies are required to explore the full potential of THC/CBD spray in these patients.
Li, J; Guo, L-X; Zeng, H; Han, X-B
2009-06-01
A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at a large incident angle and large wind speed.
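A serial 1-D FDTD kernel makes the loop structure that an MPI version decomposes concrete. The sketch below (normalized units and simple boundaries, not the paper's UPML setup) notes in the docstring where the halo exchange would occur in a distributed implementation:

```python
import math

def fdtd_1d(n_cells=200, n_steps=80, src=50):
    """Minimal 1-D Yee FDTD update in normalized units at the magic time
    step (Courant number 1), with a soft Gaussian source and simple fixed
    boundaries. An MPI-parallel version like the paper's splits the grid
    into contiguous chunks, one per process, and exchanges the single
    boundary ("halo") cell of ez/hy with each neighbour every time step."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for n in range(n_steps):
        for i in range(n_cells - 1):              # H-field half step
            hy[i] += ez[i + 1] - ez[i]
        for i in range(1, n_cells):               # E-field half step
            ez[i] += hy[i] - hy[i - 1]
        ez[src] += math.exp(-((n - 15.0) / 5.0) ** 2)  # soft source
    return ez

ez = fdtd_1d()
```

Causality gives a built-in check: influence travels at most one cell per step, so after 80 steps every cell farther than 80 cells from the source is still exactly zero, and a domain-decomposed run must reproduce this bit-for-bit.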
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
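For context, the coefficient matrices in question arise from the standard stochastic Galerkin construction, sketched below; the notation (basis functions ψ_k and triple products e_ijk) is ours and may differ from the papers' conventions:

```latex
% gPC expansions of the random diffusivity and of the solution
a(x,\xi) = \sum_{k=0}^{K} a_k(x)\,\psi_k(\xi), \qquad
u(x,\xi) \approx \sum_{j=0}^{P} u_j(x)\,\psi_j(\xi).

% Galerkin projection onto \psi_i couples the modes u_j through
% coefficient matrices built from triple products of the basis:
-\sum_{j=0}^{P} \nabla\cdot\!\left( A_{ij}(x)\,\nabla u_j(x) \right) = f_i(x),
\qquad
A_{ij}(x) = \sum_{k=0}^{K} a_k(x)\, e_{ijk},
\qquad
e_{ijk} = \mathbb{E}\!\left[ \psi_i \psi_j \psi_k \right].
```

Symmetry of A(x) is immediate from the symmetry of e_ijk in its indices; the open question, as we read the abstract, concerns finer spectral properties (such as definiteness and eigenvalue behavior) of these matrices.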
NASA Astrophysics Data System (ADS)
Colojoara, Carmen; Gabay, Shimon; van der Meulen, Freerk W.; van Gemert, Martin J. C.; Miron, Mariana I.; Mavrantoni, Androniki
1997-12-01
Dentin hypersensitivity is considered to be a consequence of open dentin tubules on the exposed dentin surface. The various methods and materials used to treat this condition aim to achieve tubule occlusion. The purpose of this study was to evaluate, by scanning electron microscopy and clinical examination, the sealing effects of a CO2 laser on the dentin tubules of human teeth without damage to the surrounding tissues. Samples of freshly extracted noncarious third molars were used. The teeth were randomly divided into two groups, A and B. The samples of group A were exposed to the laser beam in the cervical area, directed parallel to their dentin tubules. The teeth of group B were sectioned through a hypothetical carious lesion and lased perpendicularly or obliquely to the dentin tubules. The CO2 laser, at 10.6 micrometers wavelength, was operated only in pulsed mode and provided 6.25-350 mJ in bursts of 25 pulses, each of 250 microseconds duration, with a 2 ms interval between successive pulses (repetition rate up to 500 Hz). Melting of the dentin surface and partial closure of exposed dentin tubules were found for all specimens at 6.25 to 31.25 mJ energy. Our results indicate that by using the CO2 laser with the beam oriented parallel to the dentin tubules, dentin sensitivity can be reduced without any damage to pulp vitality.
Heron, Stuart R; Woby, Steve R; Thompson, Dave P
2017-06-01
To assess the efficacy of three different exercise programmes in treating rotator cuff tendinopathy/shoulder impingement syndrome. Parallel-group randomised clinical trial. Two out-patient NHS physiotherapy departments in Manchester, United Kingdom. 120 patients with shoulder pain of at least three months' duration; pain was reproduced on stressing the rotator cuff, and participants had full passive range of movement at the shoulder. Three dynamic rotator cuff loading programmes: open chain resisted band exercises (OC), closed chain exercises (CC), and minimally loaded range of movement exercises (ROM). Change in Shoulder Pain and Disability Index (SPADI) score and the proportion of patients making a Minimally Clinically Important Change (MCIC) in symptoms 6 weeks after commencing treatment. All three programmes resulted in significant decreases in SPADI score; however, there were no significant differences between the groups. The proportions of participants making an MCIC in symptoms were similar across all groups, although more participants deteriorated in the ROM group. The dropout rate was higher in the CC group, but when only patients completing treatment were considered, more patients in the CC group made a meaningful reduction in pain and disability. Open chain, closed chain and range of movement exercises all appear effective in bringing about short-term changes in pain and disability in patients with rotator cuff tendinopathy. ISRCTN76701121. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Determination of Algorithm Parallelism in NP Complete Problems for Distributed Architectures
1990-03-05
[Garbled OCR fragment of an appendix ADT listing; the recoverable declarations are:]
structure STACK
    declare OpenStack(S_NODE **TopPtr) -> TopPtr
            FlushStack(S_NODE **TopPtr) -> TopPtr
            PushOnStack(S_NODE **TopPtr, ITEM *NewItemPtr) ...
Solid state pulsed power generator
Tao, Fengfeng; Saddoughi, Seyed Gholamali; Herbon, John Thomas
2014-02-11
A power generator includes one or more full bridge inverter modules coupled to a semiconductor opening switch (SOS) through an inductive resonant branch. Each module includes a plurality of switches that are switched in a fashion causing the one or more full bridge inverter modules to drive the SOS through the resonant branch, generating pulses to a load connected in parallel with the SOS.
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution, while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer-generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power, a huge performance increase compared to its CPU-based counterpart could be achieved. For several years, efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL, it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and multi-core CPUs for faster computation.
In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
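The per-object independence described above is what makes the propagation embarrassingly parallel. As a hedged sketch of the idea, a simplified circular-orbit propagator mapped over Python's multiprocessing pool as a stand-in for the GPU/OpenCL kernel; the function and parameter names are illustrative, not the Institute's actual code:

```python
import math
from multiprocessing import Pool

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def propagate(args):
    """Advance one debris object on a circular orbit by t seconds."""
    semi_major_axis, anomaly0, t = args
    n = math.sqrt(MU_EARTH / semi_major_axis ** 3)   # mean motion, rad/s
    anomaly = (anomaly0 + n * t) % (2.0 * math.pi)
    # In-plane position only; a full propagator would add inclination,
    # node, eccentricity, etc.
    return (semi_major_axis * math.cos(anomaly),
            semi_major_axis * math.sin(anomaly))

if __name__ == "__main__":
    # Each object is independent, so the update maps cleanly onto any
    # parallel backend (here a process pool; on a GPU, one thread per object).
    objects = [(7.0e6 + 1000.0 * k, 0.0, 600.0) for k in range(1000)]
    with Pool() as pool:
        states = pool.map(propagate, objects)
    print(len(states))  # one propagated state per object
```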
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15- to 30-year ocean science datasets. Our parallel analytics engine extends the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and for storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: parallel generation (Spark on a compute cluster) of 15- to 30-year ocean climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; and scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or Amazon.
The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
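The climatology-minus-daily-field idea at the core of the pipeline can be sketched in a few lines of NumPy. This is a toy serial version under stated assumptions: array shapes, function names, and the threshold are illustrative, not the OceanXtremes/NEXUS API, and the real system shards this work over Spark tiles:

```python
# Toy sketch of climatology-based anomaly detection: build a per-day-of-year
# mean field from an SST record, subtract it from each daily field, and flag
# days whose region-averaged anomaly exceeds a threshold.
import numpy as np

def climatology(sst, day_of_year, n_days=365):
    """Mean field per day-of-year over the whole record (simple pixel average)."""
    clim = np.zeros((n_days,) + sst.shape[1:])
    for d in range(n_days):
        sel = sst[day_of_year == d]
        if len(sel):
            clim[d] = sel.mean(axis=0)
    return clim

def detect_anomalies(sst, day_of_year, threshold):
    """Return indices of time steps whose region-mean anomaly exceeds threshold."""
    clim = climatology(sst, day_of_year)
    anom = sst - clim[day_of_year]        # anomaly field at each time step
    region_mean = anom.mean(axis=(1, 2))  # area average per time step
    return np.nonzero(region_mean > threshold)[0]
```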
Stocks, Jennifer Dugan; Taneja, Baldeo K; Baroldi, Paolo; Findling, Robert L
2012-04-01
To evaluate safety and tolerability of four doses of immediate-release molindone hydrochloride in children with attention-deficit/hyperactivity disorder (ADHD) and serious conduct problems. This open-label, parallel-group, dose-ranging, multicenter trial randomized children, aged 6-12 years, with ADHD and persistent, serious conduct problems to receive oral molindone thrice daily for 9-12 weeks in four treatment groups: group 1, 10 mg (5 mg if weight <30 kg); group 2, 20 mg (10 mg if <30 kg); group 3, 30 mg (15 mg if <30 kg); and group 4, 40 mg (20 mg if <30 kg). The primary objective was to evaluate the safety and tolerability of molindone in children with ADHD and serious conduct problems. Secondary outcome measures included change in Nisonger Child Behavior Rating Form-Typical Intelligence Quotient (NCBRF-TIQ) Conduct Problem subscale scores, change in Clinical Global Impressions-Severity (CGI-S) and -Improvement (CGI-I) subscale scores from baseline to end point, and Swanson, Nolan, and Pelham rating scale-revised (SNAP-IV) ADHD-related subscale scores. The study randomized 78 children; 55 completed the study. Treatment with molindone was generally well tolerated, with no clinically meaningful changes in laboratory or physical examination findings. The most common treatment-related adverse events (AEs) included somnolence (n=9), weight increase (n=8), akathisia (n=4), sedation (n=4), and abdominal pain (n=4). Mean weight increased by 0.54 kg, and mean body mass index by 0.24 kg/m(2). The incidence of AEs and treatment-related AEs increased with increasing dose. NCBRF-TIQ subscale scores improved in all four treatment groups, with 34%, 34%, 32%, and 55% decreases from baseline in groups 1, 2, 3, and 4, respectively. CGI-S and SNAP-IV scores improved over time in all treatment groups, and CGI-I scores improved to the greatest degree in group 4.
Molindone at doses of 5-20 mg/day (children weighing <30 kg) and 20-40 mg (≥ 30 kg) was well tolerated, and preliminary efficacy results suggest that molindone produces dose-related behavioral improvements over 9-12 weeks. Additional double-blind, placebo-controlled trials are needed to further investigate molindone in this pediatric population.
ERIC Educational Resources Information Center
Wilens, Timothy E.; Gault, Laura M.; Childress, Ann; Kratochvil, Christopher J.; Bensman, Lindsey; Hall, Coleen M.; Olson, Evelyn; Robieson, Weining Z.; Garimella, Tushar S.; Abi-Saab, Walid M.; Apostol, George; Saltarelli, Mario D.
2011-01-01
Objective: To assess the safety and efficacy of ABT-089, a novel alpha[subscript 4]beta[subscript 2] neuronal nicotinic receptor partial agonist, vs. placebo in children with attention-deficit/hyperactivity disorder (ADHD). Method: Two multicenter, randomized, double-blind, placebo-controlled, parallel-group studies of children 6 through 12 years…
ERIC Educational Resources Information Center
Szobot, C. M.; Ketzer, C.; Parente, M. A.; Biederman, J.; Rohde, L. A.
2004-01-01
Objective: To evaluate the acute efficacy of methylphenidate (MPH) in Brazilian male children and adolescents with ADHD. Method: In a 4-day, double-blind, placebo-controlled, randomized, fix dose escalating, parallel-group trial, 36 ADHD children and adolescents were allocated to two groups: MPH (n = 19) and placebo (n = 17). Participants were…
Isiordia-Espinoza, M-A; Pozos-Guillen, A; Martinez-Rider, R; Perez-Urizar, J
2016-09-01
Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption and adverse effects. Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain than patients receiving intramuscular tramadol. According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery.
Averaging in SU(2) open quantum random walk
NASA Astrophysics Data System (ADS)
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
A Multisite Cluster Randomized Field Trial of Open Court Reading
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Dowling, N. Maritza; Schneck, Carrie
2008-01-01
In this article, the authors report achievement outcomes of a multisite cluster randomized field trial of Open Court Reading 2005 (OCR), a K-6 literacy curriculum published by SRA/McGraw-Hill. The participants are 49 first-grade through fifth-grade classrooms from predominantly minority and poor contexts across the nation. Blocking by grade level…
DNA Assembly with De Bruijn Graphs Using an FPGA Platform.
Poirier, Carl; Gosselin, Benoit; Fortier, Paul
2018-01-01
This paper presents an FPGA implementation of a DNA assembly algorithm, called Ray, initially developed to run on parallel CPUs. The OpenCL language is used, and the focus is placed on modifying and optimizing the original algorithm to better suit the new parallelization tool and the radically different hardware architecture. The results show that the execution time is roughly one fourth that of the CPU, and factoring in energy consumption yields a tenfold saving.
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
NASA Astrophysics Data System (ADS)
Endo, M.; Hori, T.; Koyama, K.; Yamaguchi, I.; Arai, K.; Kaiho, K.; Yanabu, S.
2008-02-01
Using a high-temperature superconductor, we constructed and tested a model Superconducting Fault Current Limiter (SFCL) that has a vacuum interrupter with an electromagnetic repulsion mechanism. We set out to construct a high-voltage-class SFCL and produced an electromagnetic repulsion switch equipped with a 24 kV vacuum interrupter (VI). A problem is that the opening speed becomes slow, because a larger vacuum interrupter has a heavier contact. For this reason, the current which flows in the superconductor may not be interrupted within a half cycle of the current. To solve this problem, it is necessary to change the design of the coil connected in parallel and to strengthen the electromagnetic repulsion force at the moment the vacuum interrupter opens. The coil design was therefore changed, and a current limiting test was conducted to examine whether the problem was solved. We performed the current limiting test using 4 series- and 2 parallel-connected YBCO thin films, each 12 cm long, with a parallel resistance (0.1 Ω) connected across each film. As a result, we succeeded in interrupting the superconductor current within a half cycle, and the series- and parallel-connected YBCO thin films limited the current without failure.
Efficient Scalable Median Filtering Using Histogram-Based Operations.
Green, Oded
2018-05-01
Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is non-sorting-based. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation has near-perfect linear scaling on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches are preferable as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
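The histogram idea can be illustrated with a small serial sketch in the spirit of Huang's classic running-histogram median, not the paper's parallel GPU algorithm: instead of sorting each window, maintain a 256-bin histogram of the 8-bit window contents, walk it to the median, and update it incrementally as the window slides.

```python
# Sketch of non-sorting median filtering for 8-bit data: a 256-bin
# histogram replaces per-window sorting, and sliding the window only
# touches two bins (one sample leaves, one enters).
def median_from_histogram(hist, window_size):
    """Return the median value encoded by a 256-bin histogram."""
    target = (window_size + 1) // 2
    count = 0
    for value in range(256):
        count += hist[value]
        if count >= target:
            return value
    raise ValueError("histogram does not hold window_size samples")

def median_filter_1d(signal, radius):
    """Sliding-window median of an 8-bit sequence (edges clamped),
    updating the histogram incrementally instead of re-sorting."""
    n = len(signal)
    clamp = lambda i: max(0, min(n - 1, i))
    hist = [0] * 256
    # Prime the histogram with the first window.
    for i in range(-radius, radius + 1):
        hist[signal[clamp(i)]] += 1
    out = []
    for i in range(n):
        out.append(median_from_histogram(hist, 2 * radius + 1))
        # Slide: remove the leftmost sample, add the next one on the right.
        hist[signal[clamp(i - radius)]] -= 1
        hist[signal[clamp(i + radius + 1)]] += 1
    return out
```

For a window of w samples this costs O(256) per output in the worst case rather than O(w log w), which is why the approach wins as windows grow.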
Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes
Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; ,Caffrey, Patrick; Fier, Jeffrey
2010-10-05
In a parallel computing environment comprising a network of SMP nodes, each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system use a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized for large processor-count SPMD bulk-synchronous programs.
Effect of Trelagliptin on Quality of Life in Patients with Type 2 Diabetes Mellitus: Study Protocol.
Ishii, Hitoshi; Suzaki, Yuki; Miyata, Yuko
2017-12-01
Long-term glycemic control in type 2 diabetes is critical to prevent or delay the onset of macrovascular and microvascular complications. Medication adherence is an integral component of type 2 diabetes management. Minimizing the dosing frequency of antidiabetic drugs may reduce treatment burden for patients and improve medication adherence. This study has been proposed to assess the reduction in treatment burden during 12 weeks' administration of trelagliptin, a weekly dosing dipeptidyl peptidase-4 (DPP-4) inhibitor, compared with a daily dosing DPP-4 inhibitor in patients with type 2 diabetes. This is a multicenter, randomized, open-label, parallel-group, comparative study to be conducted at approximately 15 sites across Japan. A total of 240 patients are to be randomized 1:1 to receive trelagliptin or a daily DPP-4 inhibitor for 12 weeks. Efficacy and safety will be compared between the two groups. The primary endpoint is the change in total score for all items of the diabetes-therapy-related QOL questionnaire from treatment start to treatment end. The study will be conducted with the highest respect for the individual participants in accordance with the protocol, the Declaration of Helsinki, the Ethical Guidelines for Clinical Research, the ICH Consolidated Guideline for Good Clinical Practice, and applicable local laws and regulations. Takeda Pharmaceutical Company Limited. Japic CTI-173482.
Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne
2014-01-01
Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This preliminary randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive behavioural therapy and recreational activity. Both interventions comprised 36 weekly 3-h sessions led by two therapists in groups of 6–8 patients. A total of 68 psychiatric patients with autism spectrum disorders participated in the study. Outcome measures were Quality of Life Inventory, Sense of Coherence Scale, Rosenberg Self-Esteem Scale and an exploratory analysis on measures of psychiatric health. Participants in both treatment conditions reported an increased quality of life at post-treatment (d = 0.39, p < 0.001), with no difference between interventions. No amelioration of psychiatric symptoms was observed. The dropout rate was lower with cognitive behavioural therapy than with recreational activity, and participants in cognitive behavioural therapy rated themselves as more generally improved, as well as more improved regarding expression of needs and understanding of difficulties. Both interventions appear to be promising treatment options for adults with autism spectrum disorder. The interventions’ similar efficacy may be due to the common elements, structure and group setting. Cognitive behavioural therapy may be additionally beneficial in terms of increasing specific skills and minimizing dropout. PMID:24089423
[Compliancy of pre-exposure prophylaxis for HIV infection in men who have sex with men in Chengdu].
Xu, J Y; Mou, Y C; Ma, Y L; Zhang, J Y
2017-05-10
Objective: To evaluate compliance with HIV pre-exposure prophylaxis (PrEP) among men who have sex with men (MSM) in Chengdu, Sichuan province, and explore the influencing factors. Methods: From 1 July 2013 to 30 September 2015, a random, open, multi-center and parallel control intervention study was conducted in 328 MSM enrolled by non-probability sampling in Chengdu. The MSM were divided into 3 groups randomly, i.e. a daily group, an intermittent group (dosing before and after exposure) and a control group. Clinical follow-up and a questionnaire survey were carried out every 3 months. PrEP compliance was evaluated and multivariate logistic regression analysis was conducted to identify the related factors. Results: A total of 141 MSM were surveyed, of whom 59 (41.8%) had good PrEP compliance. The PrEP compliance rate was 69.0% in the daily group, higher than that in the intermittent group (14.3%); the difference was significant (χ²=45.29, P<0.001). Multivariate logistic analysis indicated that the type of PrEP regimen was an influencing factor for compliance. Compared with the daily group, the intermittent group had worse PrEP compliance (OR=0.07, 95%CI: 0.03-0.16). Conclusion: The PrEP compliance of the MSM in this study was poor, and compliance was influenced by the type of PrEP regimen.
Treatment of type 2 diabetes with a combination regimen of repaglinide plus pioglitazone.
Jovanovic, Lois; Hassman, David R; Gooch, Brent; Jain, Rajeev; Greco, Susan; Khutoryansky, Naum; Hale, Paula M
2004-02-01
The efficacy and safety of combination therapy (repaglinide plus pioglitazone) was compared to repaglinide or pioglitazone in 24-week treatment of type 2 diabetes. This randomized, multicenter, open-label, parallel-group study enrolled 246 adults (age 24-85) who had shown inadequate response in previous sulfonylurea or metformin monotherapy (HbA(1c) > 7%). Prior therapy was withdrawn for 2 weeks, followed by randomization to repaglinide, pioglitazone, or repaglinide/pioglitazone. In the first 12 weeks of treatment, repaglinide doses were optimized, followed by 12 weeks of maintenance therapy. Pioglitazone dosage was fixed at 30 mg per day. Baseline HbA(1c) values were comparable (9.0% for repaglinide, 9.1% for pioglitazone, 9.3% for combination). Mean changes in HbA(1c) values at the end of treatment were -1.76% for repaglinide/pioglitazone, -0.18% for repaglinide, +0.32% for pioglitazone. Fasting plasma glucose reductions were -82 mg/dl for combination therapy, -34 mg/dl for repaglinide, -18 mg/dl for pioglitazone. Minor hypoglycemia occurred in 5% of patients for the combination, 8% for repaglinide, and 3% for pioglitazone. Weight gains for combination therapy were correlated to individual HbA(1c) reductions. In summary, for patients who had previously failed oral antidiabetic monotherapy, the combination repaglinide/pioglitazone had acceptable safety, with greater reductions of glycemic parameters than therapy using either agent alone.
Pirard, Céline; Loumaye, Ernest; Wyns, Christine
2015-01-01
Background. The aim of this pilot study was to evaluate intranasal buserelin for luteal phase support and compare its efficacy with standard vaginal progesterone in IVF/ICSI antagonist cycles. Methods. This is a prospective, randomized, open, parallel group study. Forty patients underwent ovarian hyperstimulation with human menopausal gonadotropin under pituitary inhibition with gonadotropin-releasing hormone antagonist, while ovulation trigger and luteal support were achieved using intranasal GnRH agonist (group A). Twenty patients had their cycle downregulated with buserelin and stimulated with hMG, while ovulation trigger was achieved using 10,000 IU human chorionic gonadotropin with luteal support by intravaginal progesterone (group B). Results. No difference was observed in estradiol levels. Progesterone levels on day 5 were significantly lower in group A. However, significantly higher levels of luteinizing hormone were observed in group A during the entire luteal phase. Pregnancy rates (31.4% versus 22.2%), implantation rates (22% versus 15.4%), and clinical pregnancy rates (25.7% versus 16.7%) were not statistically different between groups, although a trend towards higher rates was observed in group A. No luteal phase lasting less than 10 days was recorded in either group. Conclusion. Intranasal administration of buserelin is effective for providing luteal phase support in IVF/ICSI antagonist protocols. PMID:25945092
Pagel, Judith-Irina; Hulde, Nikolai; Kammerer, Tobias; Schwarz, Michaela; Chappell, Daniel; Burges, Alexander; Hofmann-Kiefer, Klaus; Rehm, Markus
2017-07-10
This study aims to investigate the effects of a modified, balanced crystalloid including phosphate in a perioperative setting in order to maintain a stable electrolyte and acid-base homeostasis in the patient. This is a single-centre, open-label, randomized controlled trial involving two parallel groups of female patients, comparing a perioperative infusion regime of sodium glycerophosphate plus Jonosteril® (treatment group) with Jonosteril® alone (comparator). The primary endpoint is maintenance of a stable concentration of weak acids [A-] according to the Stewart approach to acid-base balance. Secondary endpoints are measurement of serum phosphate levels; other acid-base parameters such as the strong ion difference (SID); the onset and severity of postoperative nausea and vomiting (PONV); electrolyte levels and their excretion in the urine; monitoring of renal function and glycocalyx components; haemodynamics; the amounts of catecholamines and other vasopressors used; and the safety of the infusion regime. Perioperative fluid replacement with currently available crystalloid preparations still fails to maintain a stable acid-base balance, and experts agree that common balanced solutions are still not ideal. This study aims to investigate the effectiveness and safety of a new crystalloid solution, created by adding sodium glycerophosphate to a standardized crystalloid preparation, in order to maintain a balanced perioperative acid-base homeostasis. EudraCT number 201002422520. Registered on 30 November 2010.
Sohn, Hoon-Sang; Jeon, Yoon Sang; Lee, JuHan; Shin, Sang-Jin
2017-06-01
Recently, minimally invasive plate osteosynthesis (MIPO) has been widely used for the treatment of proximal humeral fractures. However, there is concern about whether MIPO in comminuted proximal humeral fractures is comparable to open plating. The purpose of this study was to compare the clinical and radiographic outcomes of open plating and MIPO for acute displaced proximal humeral fractures. In this prospective, randomized controlled study, 107 patients who had an acute proximal humeral fracture were randomized to either the open plating or the MIPO technique. Forty-five patients treated with open plating and 45 with the MIPO technique, followed up for at least 1 year, were evaluated. Shoulder functional assessment, operating time, several radiographic parameters, and complications were evaluated at final follow-up. The mean follow-up period was 15.0 months in the open plating group and 14.3 months in the MIPO group. There were no statistically significant differences in functional assessment scores or radiographic parameters between the two groups. High complication rates were found for 4-part fractures with both surgical methods. The average operation time in the MIPO group was significantly shorter than in the open plating group (p<0.05). This study showed that MIPO in proximal humerus fractures had clinical and radiographic outcomes similar to open plating, but provided a significantly shorter operation time. Copyright © 2017 Elsevier Ltd. All rights reserved.
Systems and methods for photovoltaic string protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krein, Philip T.; Kim, Katherine A.; Pilawa-Podgurski, Robert C. N.
A system and method includes a circuit for protecting a photovoltaic string. A bypass switch connects in parallel to the photovoltaic string, and a hot spot protection switch connects in series with the photovoltaic string. A first control signal controls opening and closing of the bypass switch, and a second control signal controls opening and closing of the hot spot protection switch. Upon detection of a hot spot condition, the first control signal closes the bypass switch, and after the bypass switch is closed, the second control signal opens the hot spot protection switch.
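The switching order described above matters: the parallel bypass must close before the series switch opens, so the string current never loses a path. A minimal sketch of that sequence follows; all class and attribute names are our own illustrative inventions, not from the patent.

```python
class StringProtector:
    """Toy model of the two-switch hot-spot protection sequence."""

    def __init__(self):
        self.bypass_closed = False   # parallel bypass switch (normally open)
        self.series_closed = True    # series hot-spot protection switch (normally closed)
        self.log = []                # records the switching order

    def on_hot_spot(self):
        """Handle a detected hot-spot condition."""
        self.bypass_closed = True    # first control signal: close the bypass
        self.log.append("bypass_closed")
        self.series_closed = False   # second control signal: open the series switch
        self.log.append("series_opened")


p = StringProtector()
p.on_hot_spot()
print(p.log)  # the bypass closes before the series switch opens
```

The point of the ordering is visible in the log: at no instant are both switches in their "open" state simultaneously.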
Brainhack: a collaborative workshop for the open neuroscience community.
Cameron Craddock, R; S Margulies, Daniel; Bellec, Pierre; Nolan Nichols, B; Alcauter, Sarael; A Barrios, Fernando; Burnod, Yves; J Cannistraci, Christopher; Cohen-Adad, Julien; De Leener, Benjamin; Dery, Sebastien; Downar, Jonathan; Dunlop, Katharine; R Franco, Alexandre; Seligman Froehlich, Caroline; J Gerber, Andrew; S Ghosh, Satrajit; J Grabowski, Thomas; Hill, Sean; Sólon Heinsfeld, Anibal; Matthew Hutchison, R; Kundu, Prantik; R Laird, Angela; Liew, Sook-Lei; J Lurie, Daniel; G McLaren, Donald; Meneguzzi, Felipe; Mennes, Maarten; Mesmoudi, Salma; O'Connor, David; H Pasaye, Erick; Peltier, Scott; Poline, Jean-Baptiste; Prasad, Gautam; Fraga Pereira, Ramon; Quirion, Pierre-Olivier; Rokem, Ariel; S Saad, Ziad; Shi, Yonggang; C Strother, Stephen; Toro, Roberto; Q Uddin, Lucina; D Van Horn, John; W Van Meter, John; C Welsh, Robert; Xu, Ting
2016-01-01
Brainhack events offer a novel workshop format with participant-generated content that caters to the rapidly growing open neuroscience community. Including components from hackathons and unconferences, as well as parallel educational sessions, Brainhack fosters novel collaborations around the interests of its attendees. Here we provide an overview of its structure, past events, and example projects. Additionally, we outline current innovations such as regional events and post-conference publications. Through introducing Brainhack to the wider neuroscience community, we hope to provide a unique conference format that promotes the features of collaborative, open science.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but most computationally intensive, tasks for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming on a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising for accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
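MPI-style schemes rest on dividing work statically across ranks. As a minimal illustration (ours, not from the paper), a block partition of N independent simulation tasks over R ranks might look like:

```python
def partition(n_tasks, n_ranks):
    """Split n_tasks independent simulations as evenly as possible
    across n_ranks, the way a static MPI decomposition would."""
    base, extra = divmod(n_tasks, n_ranks)
    chunks, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)  # first `extra` ranks get one more
        chunks.append(range(start, start + size))
        start += size
    return chunks


# e.g. 10 contingency cases over 4 ranks -> sizes 3, 3, 2, 2
ranks = partition(10, 4)
```

Each rank then runs only its own slice; an OpenMP scheme would instead let the runtime assign loop iterations to threads over shared memory.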
Klein, Max; Sharma, Rati; Bohrer, Chris H; Avelis, Cameron M; Roberts, Elijah
2017-01-15
Data-parallel programming techniques can dramatically decrease the time needed to analyze large datasets. While these methods have provided significant improvements for sequencing-based analyses, other areas of biological informatics have not yet adopted them. Here, we introduce Biospark, a new framework for performing data-parallel analysis on large numerical datasets. Biospark builds upon the open source Hadoop and Spark projects, bringing domain-specific features for biology. Source code is licensed under the Apache 2.0 open source license and is available at the project website: https://www.assembla.com/spaces/roberts-lab-public/wiki/Biospark Contact: eroberts@jhu.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Structure and function of SemiSWEET and SWEET sugar transporters.
Feng, Liang; Frommer, Wolf B
2015-08-01
SemiSWEETs and SWEETs have emerged as unique sugar transporters. First discovered in plants with the help of fluorescent biosensors, homologs exist in all kingdoms of life. Bacterial and plant homologs transport hexoses and sucrose, whereas animal SWEETs transport glucose. Prokaryotic SemiSWEETs are small and comprise a parallel homodimer of an approximately 100 amino acid-long triple helix bundle (THB). Duplicated THBs are fused to create eukaryotic SWEETs in a parallel orientation via an inversion linker helix, producing a similar configuration to that of SemiSWEET dimers. Structures of four SemiSWEETs have been resolved in three states: open outside, occluded, and open inside, indicating alternating access. As we discuss here, these atomic structures provide a basis for exploring the evolution of structure-function relations in this new class of transporters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
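The appeal of generating set search for parallel evaluation is that each poll step produces a batch of independent trial points. A toy serial sketch of one GSS-style coordinate poll loop (our illustration under simplified assumptions, not HOPSPACK's actual algorithm or API; the contraction factor and stopping rule are arbitrary choices):

```python
def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free minimization by polling the 2n coordinate
    directions; the 2n trial evaluations per iteration are independent,
    which is where a framework like HOPSPACK spends its parallelism."""
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        fx = f(x)
        trials = []                       # poll set: x +/- step * e_i
        for i in range(n):
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                trials.append((f(y), y))  # independent -> parallelizable
        fbest, ybest = min(trials, key=lambda t: t[0])
        if fbest < fx:
            x = ybest                     # successful poll: move
        else:
            step *= 0.5                   # unsuccessful poll: contract
    return x


xmin = gss_minimize(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

On this smooth quadratic the poll loop walks to the minimizer (3, −1) and then contracts the step until the tolerance is met.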
Use of multi-opening burrow systems by black-footed ferrets
Biggins, Dean E.
2012-01-01
Multi-opening burrow systems constructed by prairie dogs (Cynomys) ostensibly provide escape routes when prairie dogs are pursued by predators capable of entering the burrows, such as black-footed ferrets (Mustela nigripes), or by predators that can rapidly dig into the tunnels, such as American badgers (Taxidea taxus). Because badgers also prey on ferrets, ferrets might similarly benefit from multi-opening burrow systems. Using an air blower, white-tailed prairie dog (Cynomys leucurus) burrow openings were tested for connectivity on plots occupied by black-footed ferrets and on randomly selected plots in Wyoming. Significantly more connected openings were found on ferret-occupied plots than on random plots. Connected openings might be due to modifications by ferrets in response to plugging by prairie dogs, due to selection by ferrets for complex systems with multiple openings that are already unobstructed, or simply due to ferrets lingering at kill sites that were multi-opening systems selected by their prairie dog prey.
Full-f version of GENE for turbulence in open-field-line systems
NASA Astrophysics Data System (ADS)
Pan, Q.; Told, D.; Shi, E. L.; Hammett, G. W.; Jenko, F.
2018-06-01
Unique properties of plasmas in the tokamak edge, such as large amplitude fluctuations and plasma-wall interactions in the open-field-line regions, require major modifications of existing gyrokinetic codes originally designed for simulating core turbulence. To this end, the global version of the 3D2V gyrokinetic code GENE, so far employing a δf-splitting technique, is extended to simulate electrostatic turbulence in straight open-field-line systems. The major extensions are the inclusion of the velocity-space nonlinearity, the development of a conducting-sheath boundary, and the implementation of the Lenard-Bernstein collision operator. With these developments, the code can be run as a full-f code and can handle particle loss to and reflection from the wall. The extended code is applied to modeling turbulence in the Large Plasma Device (LAPD), with a reduced mass ratio and a much lower collisionality. Similar to turbulence in a tokamak scrape-off layer, LAPD turbulence involves collisions, parallel streaming, cross-field turbulent transport with steep profiles, and particle loss at the parallel boundary.
Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldiwany, B.; Alvarez, P.D.; Wolfe, K.
1996-12-01
An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data.
Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.
Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., reduction vs. forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program. The numerical results are verified against analytical solutions for the solitary and standing waves. The best speedup for the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
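The tridiagonal solve at the heart of the scheme can be sketched as follows. This is a textbook serial cyclic reduction for n = 2^s − 1 unknowns (our illustration, not the authors' code); the point is that all eliminations within one level are independent of each other, which is what the paper distributes across OpenMP threads.

```python
import math

def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i]
    (a[0] = c[n-1] = 0) by cyclic reduction; n must equal 2**s - 1."""
    n = len(b)
    s = int(math.log2(n + 1))
    a, b, c, d = a[:], b[:], c[:], d[:]           # keep inputs intact
    for lvl in range(s - 1):                      # forward elimination
        off = 2 ** lvl
        for i in range(2 * off - 1, n, 2 * off):  # independent within a level
            al = a[i] / b[i - off]
            ga = c[i] / b[i + off]
            b[i] -= al * c[i - off] + ga * a[i + off]
            d[i] -= al * d[i - off] + ga * d[i + off]
            a[i] = -al * a[i - off]               # new coupling at distance 2*off
            c[i] = -ga * c[i + off]
    x = [0.0] * n
    for lvl in range(s - 1, -1, -1):              # back substitution
        off = 2 ** lvl
        for i in range(off - 1, n, 2 * off):      # also independent per level
            left = a[i] * x[i - off] if i - off >= 0 else 0.0
            right = c[i] * x[i + off] if i + off < n else 0.0
            x[i] = (d[i] - left - right) / b[i]
    return x
```

Each forward level halves the number of coupled unknowns, so the serial dependency chain is only O(log n) levels deep while the work inside every level parallelizes cleanly.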
A hybrid parallel framework for the cellular Potts model simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yi; He, Kejing; Dong, Shoubin
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large scale simulations (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
NASA Astrophysics Data System (ADS)
Grzeszczuk, A.; Kowalski, S.
2015-04-01
Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to increase the speed of graphics by computing processes in parallel. The success of this solution opened the technology of General-Purpose Graphics Processing Units (GPGPUs) to applications not coupled with graphics. GPGPU systems can be applied as an effective tool to reduce the huge amount of data from pulse shape analysis measurements, either by on-line recalculation or by a very fast compression system. The simplified structure of the CUDA system and its programming model, based on the example of an Nvidia GeForce GTX 580 card, are presented in our poster contribution, both in a stand-alone version and as a ROOT application.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
Electromagnetic Physics Models for Parallel Computing Architectures
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-10-01
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
NASA Astrophysics Data System (ADS)
Xue, Xiaofeng
2016-12-01
In this paper we are concerned with the contact process with random recovery rates on open clusters of bond percolation on Z^d. Let ξ be a random variable such that P(ξ ≥ 1) = 1, which ensures E[1/ξ] < +∞; we assign i.i.d. copies of ξ to the vertices as the random recovery rates. Assuming that each edge is open with probability p and the infection can only spread through the open edges, we obtain that limsup_{d→+∞} λ_d ≤ λ_c = 1/(p E[1/ξ]), where λ_d is the critical value of the process on Z^d, i.e., the supremum of the infection rates with which the infection dies out with probability one when only the origin is infected at t = 0. To prove this main result, we show that the following phase transition occurs. Assuming that ⌈log d⌉ vertices are infected at t = 0, where these vertices can be located anywhere, then when the infection rate λ > λ_c the process survives with high probability as d → +∞, while when λ < λ_c the process dies out by time O(log d) with high probability.
The generalized accessibility and spectral gap of lower hybrid waves in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Hironori
1994-03-01
The generalized accessibility of lower hybrid waves, primarily in the current drive regime of tokamak plasmas, which may include shifting, either upward or downward, of the parallel refractive index (n∥), is investigated, based upon a cold plasma dispersion relation and various geometrical constraint (G.C.) relations imposed on the behavior of n∥. It is shown that n∥ upshifting can be bounded and insufficient to bridge a large spectral gap to cause wave damping, depending upon whether the G.C. relation allows the oblique resonance to occur. The traditional n∥ upshifting mechanism caused by the pitch angle of magnetic field lines is shown to lead to contradictions with experimental observations. An upshifting mechanism brought about by the density gradient along field lines is proposed, which is not inconsistent with experimental observations, and provides plausible explanations for some unresolved issues of lower hybrid wave theory, including the generation of 'seed electrons.'
DOE Office of Scientific and Technical Information (OSTI.GOV)
We present a parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and a multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
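For reference, the seeding step being parallelized works as follows: the first seed is chosen uniformly at random, and each subsequent seed is drawn with probability proportional to its squared distance from the nearest seed chosen so far. A compact serial sketch (the distance-update loop over points is the data-parallel part; the original report's code is C++, this Python version is only illustrative):

```python
import random

def sq_dist(p, q):
    """Squared Euclidean distance between two points."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def kmeanspp_seeds(points, k, rng):
    """k-means++ seed selection (Arthur & Vassilvitskii, 2007)."""
    seeds = [points[rng.randrange(len(points))]]  # first seed: uniform
    while len(seeds) < k:
        # squared distance of every point to its nearest seed:
        # each point is independent -> the parallelizable loop
        d2 = [min(sq_dist(p, s) for s in seeds) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):              # weighted draw
            acc += w
            if acc >= r:
                seeds.append(p)
                break
        else:                                      # float round-off fallback
            seeds.append(points[-1])
    return seeds


rng = random.Random(0)
seeds = kmeanspp_seeds([(0, 0), (0, 0), (0, 0), (100, 100)], 2, rng)
```

With three coincident points and one distant outlier, the distance weighting makes the two seeds land on the two distinct locations.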
Constructing high complexity synthetic libraries of long ORFs using in vitro selection
NASA Technical Reports Server (NTRS)
Cho, G.; Keefe, A. D.; Liu, R.; Wilson, D. S.; Szostak, J. W.
2000-01-01
We present a method that can significantly increase the complexity of protein libraries used for in vitro or in vivo protein selection experiments. Protein libraries are often encoded by chemically synthesized DNA, in which part of the open reading frame is randomized. There are, however, major obstacles associated with the chemical synthesis of long open reading frames, especially those containing random segments. Insertions and deletions that occur during chemical synthesis cause frameshifts, and stop codons in the random region will cause premature termination. These problems can together greatly reduce the number of full-length synthetic genes in the library. We describe a strategy in which smaller segments of the synthetic open reading frame are selected in vitro using mRNA display for the absence of frameshifts and stop codons. These smaller segments are then ligated together to form combinatorial libraries of long uninterrupted open reading frames. This process can increase the number of full-length open reading frames in libraries by up to two orders of magnitude, resulting in protein libraries with complexities of greater than 10^13. We have used this methodology to generate three types of displayed protein library: a completely random sequence library, a library of concatemerized oligopeptide cassettes with a propensity for forming amphipathic alpha-helical or beta-strand structures, and a library based on one of the most common enzymatic scaffolds, the alpha/beta (TIM) barrel. Copyright 2000 Academic Press.
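The scale of the stop-codon problem can be checked with a back-of-the-envelope calculation (our illustration, not from the paper): 3 of the 64 codons are stops, so under the simplifying assumption of uniform codon usage a single random codon is stop-free with probability 61/64, and a run of N random codons is stop-free with probability (61/64)^N.

```python
def stop_free_fraction(n_codons):
    """Probability that n random codons contain no stop codon,
    assuming uniform usage of the 64 codons (3 of which are stops)."""
    return (61 / 64) ** n_codons


# a 100-codon random region is stop-free well under 1% of the time,
# which is why selecting and ligating short verified segments helps
frac = stop_free_fraction(100)
```

This is exactly the multiplicative loss that the segment-selection-and-ligation strategy avoids.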
Microfabricated linear Paul-Straubel ion trap
Mangan, Michael A [Albuquerque, NM; Blain, Matthew G [Albuquerque, NM; Tigges, Chris P [Albuquerque, NM; Linker, Kevin L [Albuquerque, NM
2011-04-19
An array of microfabricated linear Paul-Straubel ion traps can be used for mass spectrometric applications. Each ion trap comprises two parallel inner RF electrodes and two parallel outer DC control electrodes, symmetric about a central trap axis and suspended over an opening in a substrate. Neighboring ion traps in the array can share a common outer DC control electrode. The ions are confined transversely by an RF quadrupole electric field potential well on the ion trap axis. The array can trap a wide variety of ions.
Performance comparison analysis library communication cluster system using merge sort
NASA Astrophysics Data System (ADS)
Wulandari, D. A. R.; Ramadhan, M. E.
2018-04-01
Computation began with single processors; to increase computing speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, exemplified by clusters. A cluster must have a communication protocol for processing, such as the Message Passing Interface (MPI). MPI has many library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library suit the characteristics of the problem, so this study aims to analyze the comparative performance of such libraries in handling parallel computing processes. The case studies in this research are MPICH2 and OpenMPI. This research executes a sorting problem, using the merge sort method, to assess the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze the performance of the system using different test scenarios and three parameters: execution time, speedup, and efficiency. The results of this study showed that with each increase in data size, OpenMPI and MPICH2 tend to show increasing average speedup and efficiency, which then decrease at large data sizes. An increased data size does not necessarily increase speedup and efficiency, only execution time, for example at a data size of 100000. OpenMPI had an execution time greater than MPICH2 in some cases; for example, at a data size of 1000 the average execution time with MPICH2 was 0.009721 and with OpenMPI 0.003895. OpenMPI can be customized to communication needs.
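The benchmark kernel and the two derived metrics are simple to state. A minimal sketch (our illustration, not the authors' cluster code), with speedup S = T1/Tp and efficiency E = S/p:

```python
def merge_sort(xs):
    """The benchmark kernel: plain top-down merge sort."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def speedup(t_serial, t_parallel):
    """S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """E = S / p; 1.0 would be perfect scaling."""
    return speedup(t_serial, t_parallel) / n_procs
```

In an MPI version each node would sort a slice of the data and the sorted slices would be merged across ranks; the metrics above are what the paper reports for MPICH2 versus OpenMPI.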
Code of Federal Regulations, 2010 CFR
2010-10-01
....137 Cargo ports. (a) Unless otherwise authorized by the Commandant, the lower edge of any opening for... is drawn parallel to the freeboard deck at side and has as its lowest point the upper edge of the...
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part improves its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
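The 2D-to-1D rewrite exploits the separability of the wavelet transform: a 2D (I)DWT is a 1D pass over every row followed by one over every column, and each row or column is independent, hence parallelizable across GPU threads. A minimal sketch using the Haar wavelet (illustrative only; the paper's codec presumably uses a different wavelet):

```python
def haar_1d(x):
    """One level of the 1D Haar DWT: pairwise averages, then details."""
    n = len(x) // 2
    return ([(x[2*i] + x[2*i+1]) / 2 for i in range(n)] +
            [(x[2*i] - x[2*i+1]) / 2 for i in range(n)])

def ihaar_1d(y):
    """Inverse 1D Haar; each output pair depends on one (avg, det) pair."""
    n = len(y) // 2
    out = []
    for a, d in zip(y[:n], y[n:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    """Forward 2D transform: 1D pass on every row, then every column."""
    rows = [haar_1d(r) for r in img]
    return [list(r) for r in zip(*[haar_1d(list(col)) for col in zip(*rows)])]

def ihaar_2d(img):
    """Inverse 2D transform: undo columns first, then rows. Every 1D
    call here is independent work -- the parallelism the paper exposes."""
    rows = [list(r) for r in zip(*[ihaar_1d(list(col)) for col in zip(*img)])]
    return [ihaar_1d(r) for r in rows]
```

Because the column passes commute across columns (and likewise for rows), a GPU can assign one thread (or thread block) per row or column with no synchronization inside a pass.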
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. Pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. Contact: tamada@ims.u-tokyo.ac.jp.
Parallel Execution of Functional Mock-up Units in Buildings Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan
2016-06-30
A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up the parallel execution of multiple FMUs.
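The solver's inner loop can be sketched under heavy simplification: here each "FMU" is reduced to a (state, rate) pair advanced by one explicit Euler step, and a thread pool plays the role OpenMP plays in the report (all names and the model x' = r·x are our illustrative assumptions, not the report's code).

```python
from concurrent.futures import ThreadPoolExecutor

def euler_step(component, dt):
    """Advance one FMU-like component (state x, rate r) by one
    explicit Euler step of x' = r * x."""
    x, rate = component
    return (x + dt * rate * x, rate)

def step_all(components, dt, workers=4):
    """Within one time step the component updates are independent,
    so they can be farmed out to a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: euler_step(c, dt), components))


# three hypothetical components with different decay rates
comps = step_all([(1.0, -1.0), (2.0, -0.5), (4.0, -2.0)], 0.1)
```

The load-balancing point in the report shows up even here: if one component is far more expensive to step than the others, the pool's slowest worker dictates the step's wall time.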
NASA Astrophysics Data System (ADS)
Handhika, T.; Bustamam, A.; Ernastuti, Kerami, D.
2017-07-01
Multi-thread programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously. Each processor can execute more than one thread over a given period of time. However, the achievable speedup depends on the processor's ability to execute only a limited number of threads, especially for sequential algorithms containing a nested loop in which the number of outer-loop iterations is greater than the maximum number of threads the processor can execute. The thread distribution technique found previously can only be applied by high-level programmers. This paper develops a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that the processor can execute is smaller than the number of outer-loop iterations. Preprocessing of data related to the numbers of outer- and inner-loop iterations, the computational time required to execute each iteration, and the maximum number of threads the processor can execute is used as a strategy to determine which parallel region will produce optimal speedup.
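The underlying scheduling question, how to spread a 2-level nested loop over a thread count that does not match the outer trip count, can be illustrated by flattening the iteration space (the effect of OpenMP's `collapse(2)` clause). This is a generic sketch of that idea, not the paper's specific procedure:

```python
def collapse_chunks(n_outer, n_inner, n_threads):
    """Flatten an n_outer x n_inner nested loop into a single index
    space and split it contiguously across n_threads, so the thread
    count need not divide (or even be smaller than) n_outer."""
    total = n_outer * n_inner
    base, extra = divmod(total, n_threads)
    chunks, start = [], 0
    for t in range(n_threads):
        size = base + (1 if t < extra else 0)
        # map each flat index k back to its (outer, inner) pair
        chunks.append([divmod(k, n_inner) for k in range(start, start + size)])
        start += size
    return chunks
```

Flattening decouples the thread count from the outer trip count; the paper's contribution is additionally weighing per-iteration cost when the iterations are not uniform.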
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, in which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a major challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end, we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
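As a rough illustration of the information-theoretic quantities such tools estimate, a plug-in mutual-information estimate for two discrete variables might look as follows. This is a sketch only; PREMER's FORTRAN 90/OpenMP core computes these and higher-order quantities far more efficiently, for every pair of network variables:

```python
from collections import Counter
from math import log

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats for two equal-length discrete
    sequences, from empirical joint and marginal frequencies. Network
    inference evaluates such pairwise (and multidimensional) quantities
    across all variables, which is why parallelization pays off."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())
```

For identical sequences the estimate equals the entropy of the variable; for independent sequences it is zero, which is the basis for declaring that an interaction exists.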
BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations
Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul
2016-01-01
Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scaling. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
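A minimal sketch of a diffusion-decay update with the accuracy orders quoted above (first order in time, second order in space) might look like this in 1-D. BioFVM itself uses an implicit, OpenMP-parallelized 3-D solver, so this explicit stencil is purely illustrative:

```python
def diffuse_decay_step(u, D, lam, dx, dt):
    """One explicit step of du/dt = D * u_xx - lam * u on a 1-D grid
    with zero-flux ends: forward Euler in time (first order) and a
    centered second difference in space (second order). Illustrative
    only; BioFVM's solver is implicit and 3-D."""
    n = len(u)
    out = list(u)
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]        # zero-flux boundary
        right = u[i + 1] if i < n - 1 else u[i]   # zero-flux boundary
        out[i] = u[i] + dt * (D * (left - 2.0 * u[i] + right) / dx ** 2
                              - lam * u[i])
    return out
```

With zero decay the zero-flux stencil conserves total substrate, a quick sanity check on any diffusion step.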
Xu, Haiyan; Gopal, Srihari; Nuamah, Isaac; Ravenstijn, Paulien; Janik, Adam; Schotte, Alain; Hough, David; Fleischhacker, Wolfgang W.
2016-01-01
Background: This double-blind, parallel-group, multicenter, phase-3 study was designed to test the noninferiority of paliperidone palmitate 3-month formulation (PP3M) to the currently marketed 1-month formulation (PP1M) in patients (age 18–70 years) with schizophrenia, previously stabilized on PP1M. Methods: After screening (≤3 weeks) and a 17-week, flexible-dosed, open-label phase (PP1M: day 1 [150mg eq. deltoid], day 8 [100mg eq. deltoid.], weeks 5, 9, and 13 [50, 75, 100, or 150mg eq., deltoid/gluteal]), clinically stable patients were randomized (1:1) to PP3M (fixed-dose, 175, 263, 350, or 525mg eq. deltoid/gluteal) or PP1M (fixed-dose, 50, 75, 100, or 150mg eq. deltoid/gluteal) for a 48-week double-blind phase. Results: Overall, 1016/1429 open-label patients entered the double-blind phase (PP3M: n=504; PP1M: n=512) and 842 completed it (including patients with relapse). PP3M was noninferior to PP1M: relapse rates were similar in both groups (PP3M: n=37, 8%; PP1M: n=45, 9%; difference in relapse-free rate: 1.2% [95% CI:-2.7%; 5.1%]) based on Kaplan-Meier estimates (primary efficacy). Secondary endpoint results (changes from double-blind baseline in Positive and Negative Syndrome Scale total and subscale scores, Clinical Global Impression-Severity, and Personal and Social Performance scores) were consistent with primary endpoint results. No clinically relevant differences were observed in pharmacokinetic exposures between PP3M and PP1M. Both groups had similar tolerability profiles; increased weight was the most common treatment-emergent adverse event (double-blind phase; 21% each). No new safety signals were detected. Conclusion: Taken together, PP3M with its 3-month dosing interval is a unique option for relapse prevention in schizophrenia. PMID:26902950
Haziza, Christelle; Weitkunat, Rolf; Magnette, John
2016-01-01
Introduction: Tobacco harm reduction aims to provide reduced risk alternatives to adult smokers who would otherwise continue smoking combustible cigarettes (CCs). This randomized, open-label, three-arm, parallel-group, single-center, short-term confinement study aimed to investigate the effects of exposure to selected harmful and potentially harmful constituents (HPHCs) of cigarette smoke in adult smokers who switched to a carbon-heated tobacco product (CHTP) compared with adult smokers who continued to smoke CCs and those who abstained from smoking for 5 days. Methods: Biomarkers of exposure to HPHCs, including nicotine and urinary excretion of mutagenic material, were measured in 24-hour urine and blood samples in 112 male and female Caucasian smokers switching from CCs to ad libitum CHTP use. Puffing topography was assessed during product use. Results: Switching to the CHTP or smoking abstinence (SA) resulted in marked decreases from baseline to Day 5 in all biomarkers of exposure measured, including carboxyhemoglobin (43% and 55% decrease in the CHTP and SA groups, respectively). The urinary excretion of mutagenic material was also markedly decreased on Day 5 compared with baseline (89% and 87% decrease in the CHTP and SA groups, respectively). No changes in biomarkers of exposure to HPHCs or urinary mutagenic material were observed between baseline and Day 5 in the CC group. Conclusions: Our results provide clear evidence supporting a reduction in the level of exposure to HPHCs of tobacco smoke in smokers who switch to CHTP under controlled conditions, similar to that observed in SA. Implications: The reductions observed in biomarkers of exposure to HPHCs of tobacco smoke in this short-term study could potentially also reduce the incidence of cancer, cardiovascular and respiratory diseases in those smokers who switch to a heated tobacco product. PMID:26817490
2014-01-01
Background The Portuguese National Health Directorate has issued clinical practice guidelines on prescription of anti-inflammatory drugs, acid suppressive therapy, and antiplatelets. However, their effectiveness in changing actual practice is unknown. Methods The study will compare the effectiveness of educational outreach visits regarding the improvement of compliance with clinical guidelines in primary care against usual dissemination strategies. A cost-benefit analysis will also be conducted. We will carry out a parallel, open, superiority, randomized trial directed to primary care physicians. Physicians will be recruited and allocated at a cluster-level (primary care unit) by minimization. Data will be analyzed at the physician level. Primary care units will be eligible if they use electronic prescribing and have at least four physicians willing to participate. Physicians in intervention units will be offered individual educational outreach visits (one for each guideline) at their workplace during a six-month period. Physicians in the control group will be offered a single unrelated group training session. Primary outcomes will be the proportion of cyclooxygenase-2 inhibitors prescribed in the anti-inflammatory class, and the proportion of omeprazole in the proton pump inhibitors class at 18 months post-intervention. Prescription data will be collected from the regional pharmacy claims database. We estimated a sample size of 110 physicians in each group, corresponding to 19 clusters with a mean size of 6 physicians. Outcome collection and data analysis will be blinded to allocation, but due to the nature of the intervention, physicians and detailers cannot be blinded. Discussion This trial will attempt to address unresolved issues in the literature, namely, long term persistence of effect, the importance of sequential visits in an outreach program, and cost issues. 
If successful, this trial may be the cornerstone for deploying large scale educational outreach programs within the Portuguese National Health Service. Trial registration ClinicalTrials.gov number NCT01984034. PMID:24423370
Pergola, Pablo E; Spiegel, David M; Warren, Suzette; Yuan, Jinwei; Weir, Matthew R
2017-01-01
Patiromer is a sodium-free, nonabsorbed, potassium binder approved for treatment of hyperkalemia. This open-label study compares the efficacy and safety of patiromer administered without food versus with food. Adults with hyperkalemia (potassium ≥5.0 mEq/L) were randomized (1:1) to receive patiromer once daily without food or with food for 4 weeks. The dosage was adjusted (maximum: 25.2 g/day) using a prespecified titration schedule to achieve and maintain potassium within a target range (3.8-5.0 mEq/L). The primary efficacy endpoint was the proportion of patients with serum potassium in the target range at either week 3 or week 4. Safety was assessed by adverse events (AEs) and laboratory testing. Efficacy was evaluated in 112 patients; 65.2% were ≥65 years of age, 75.9% had chronic kidney disease, and 82.1% had diabetes. Baseline mean serum potassium was similar in the without-food (5.44 mEq/L) and with-food (5.34 mEq/L) groups. The primary endpoint was achieved by 87.3% (95% CI 75.5-94.7) and 82.5% (95% CI 70.1-91.3) of patients in the with-food and without-food groups, respectively; least squares mean changes in serum potassium from baseline to week 4 were -0.65 and -0.62 mEq/L, respectively (p < 0.0001). The most common AEs were diarrhea and constipation. Serum K+ remained ≥3.5 mEq/L in all patients; 5 patients developed serum magnesium <1.4 mg/dL, including 4 whose baseline magnesium was below the lower limit of normal. Patiromer is equally effective and well tolerated when taken without food or with food, thereby offering the potential for dosing flexibility. © 2017 The Author(s) Published by S. Karger AG, Basel.
Knorr, Ulla; Vinberg, Maj; Mortensen, Erik Lykke; Winkel, Per; Gluud, Christian; Wetterslev, Jørn; Gether, Ulrik; Kessing, Lars Vedel
2012-01-01
Introduction The serotonergic neurotransmitter system is closely linked to depression and personality traits. It is not known if selective serotonin reuptake inhibitors (SSRI) have an effect on neuroticism that is independent of their effect on depression. Healthy individuals with a genetic liability for depression represent a group of particular interest when investigating if intervention with SSRIs affects personality. The present trial is the first to test the hypothesis that escitalopram may reduce neuroticism in healthy first-degree relatives of patients with major depressive disorder (MD). Methods The trial used a randomized, blinded, placebo-controlled parallel-group design. We examined the effect of four weeks escitalopram 10 mg daily versus matching placebo on personality in 80 people who had a biological parent or sibling with a history of MD. The outcome measure on personality traits was change in self-reported neuroticism scores on the Revised Neuroticism-Extroversion-Openness-Personality Inventory (NEO-PI-R) and the Eysenck Personality Inventory (EPQ) from entry until end of four weeks of intervention. Results When compared with placebo, escitalopram did not significantly affect self-reported NEO-PI-R and EPQ neuroticism and extroversion, EPQ psychoticism, NEO-PI-R openness, or NEO-PI-R conscientiousness (p all above 0.05). However, escitalopram increased NEO-PI-R agreeableness scores significantly compared with placebo (mean; SD: 2.38; 8.09 versus −1.32; 7.94; p = 0.046), but not following correction for multiplicity. A trend was shown for increased conscientiousness (p = 0.07). There was no significant effect on subclinical depressive symptoms (p = 0.6). Conclusion In healthy first-degree relatives of patients with MD, there is no effect of escitalopram on neuroticism, but it is possible that escitalopram may increase the personality traits of agreeableness and conscientiousness. Trial Registration Clinicaltrials.gov NCT00386841 PMID:22393376
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy-to-use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
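The notion of algorithmic choice plus autotuning can be caricatured in a few lines. The exhaustive scan over a toy configuration space below is an assumption for illustration; OpenTuner instead runs ensembles of search techniques over spaces far too large to enumerate:

```python
def autotune(objective, configs):
    """Toy autotuner: exhaustively evaluate a user-supplied cost
    objective (e.g., measured runtime) over a small configuration
    space and return the best configuration with its cost. Real
    autotuners like OpenTuner search, rather than scan, the space."""
    best = min(configs, key=objective)
    return best, objective(best)

# Algorithmic choice as a configuration: pick between two hypothetical
# solver variants using (made-up) measured costs.
costs = {"direct": 9.0, "iterative": 4.0}
best, best_cost = autotune(lambda algo: costs[algo], ["direct", "iterative"])
```

The same interface tunes numeric parameters (block sizes, thread counts) by passing a numeric range as the configuration space.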
NASA Astrophysics Data System (ADS)
Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, Y. H.
2018-05-01
In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented over a networked control system (NCS) in which only the off-line data are transmitted over the network, while the real-time data are delivered point-to-point. There are thus two controllers rather than one in the control system, which makes better use of the saved and current information and thereby improves on the performance achievable by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropouts occur randomly and are modeled as binary Bernoulli random variables. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, simulation results verify the effectiveness of the proposed method.
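The open-loop half of such an ILC law, with inputs held when a packet is dropped, might be sketched as follows. The gain, the samplewise form, and the dropout handling are illustrative assumptions; the paper's algorithm also includes a closed-loop term fed by real-time data:

```python
def ilc_update(u, error, gain, delivered):
    """One open-loop ILC iteration applied samplewise:
    u_{k+1}(i) = u_k(i) + gain * e_k(i) when the packet for sample i
    arrives, and u_{k+1}(i) = u_k(i) (hold the old input) when it is
    dropped. In the paper, delivery is a Bernoulli random variable;
    here it is passed in explicitly so the update is deterministic."""
    return [ui + gain * ei if d else ui
            for ui, ei, d in zip(u, error, delivered)]
```

Simulating the channel then amounts to drawing each `delivered[i]` as a Bernoulli sample with the assumed arrival probability and iterating until the tracking error contracts.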
Drane, Daniel L.; Loring, David W.; Voets, Natalie L.; Price, Michele; Ojemann, Jeffrey G.; Willie, Jon T.; Saindane, Amit M.; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L.; Miller, John W.; Meador, Kimford J.; Gross, Robert E.
2015-01-01
SUMMARY OBJECTIVES Temporal lobe epilepsy (TLE) patients experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, semantic memory), due to “collateral damage” to temporal regions outside the hippocampus following open surgical approaches. We predicted stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions critical for these cognitive processes. METHODS Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of nineteen patients with medically-intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant), and a comparable series of TLE patients undergoing standard surgical approaches (n=39) using a prospective, non-randomized, non-blinded, parallel group design. RESULTS Performance declines were significantly greater for the dominant TLE patients undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<.0001, η²=.57, & F=11.2, p<.001, η²=.39, respectively), and for the nondominant TLE patients undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<.02, η²=.19). When examined on an individual subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 undergoing standard surgical approaches declined on one or more measures for both object types (p<.001, Fisher’s exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (non-dominant) TLE patients declined on face recognition. 
SIGNIFICANCE Preliminary results suggest 1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and 2) the hippocampus does not appear to be an essential component of neural networks underlying name retrieval or recognition of common objects or famous faces. PMID:25489630
Collimator of multiple plates with axially aligned identical random arrays of apertures
NASA Technical Reports Server (NTRS)
Hoover, R. B.; Underwood, J. H. (Inventor)
1973-01-01
A collimator is disclosed for examining the spatial location of distant sources of radiation and for imaging by projection, small, near sources of radiation. The collimator consists of a plurality of plates, all of which are pierced with an identical random array of apertures. The plates are mounted perpendicular to a common axis, with like apertures on consecutive plates axially aligned so as to form radiation channels parallel to the common axis. For near sources, the collimator is interposed between the source and a radiation detector and is translated perpendicular to the common axis so as to project radiation traveling parallel to the common axis incident to the detector. For far sources the collimator is scanned by rotating it in elevation and azimuth with a detector to determine the angular distribution of the radiation from the source.
Andersson, Bodil; Hallén, Magnus; Leveau, Per; Bergenfelz, Anders; Westerdahl, Johan
2003-05-01
This study was designed to compare an open tension-free technique (Lichtenstein repair) with a laparoscopic totally extraperitoneal hernia repair (TEP). One hundred sixty-eight men aged 30 to 65 years with primary or recurrent inguinal hernia were randomized to TEP or open mesh technique in the manner of Lichtenstein. Follow-up was after 1 and 6 weeks, and 1 year. Eighty-one patients were randomized to TEP, and 87 to open repair. For 1 patient in each group, the operation was converted to a different type of repair. No difference was seen in overall complications between the 2 groups. However, 1 patient in the TEP group underwent operation for small bowel obstruction after surgery. A higher frequency of postoperative hematomas was seen in the open group (P <.05). Patients in the TEP group consumed less analgesic after surgery (P <.001), returned to work earlier (P <.01), and had a shorter time to full recovery (P <.01). Two recurrences occurred in the TEP group 1 year after surgery. The TEP technique was associated with less postoperative pain, a shorter time to full recovery, and an earlier return to work compared with the open tension-free repair. No difference was seen in overall complications. However, 2 recurrences did occur after 1 year in the TEP group.
Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease
Shamonin, Denis P.; Bron, Esther E.; Lelieveldt, Boudewijn P. F.; Smits, Marion; Klein, Stefan; Staring, Marius
2013-01-01
Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4–5x on an 8-core machine. Using OpenCL a speedup factor of 2x was realized for computation of the Gaussian pyramids, and 15–60x for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license. 
PMID:24474917
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the feature of this single input multiple output linear system. Because controller design under multiple constraints is difficult in the multirate switching model, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that this closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller or a single-rate parallel distributed compensation under the same conditions.
Massively parallel processor computer
NASA Technical Reports Server (NTRS)
Fung, L. W. (Inventor)
1983-01-01
An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and a large maximum number of kriging neighbours in each case, achieving high speedup in the best scenarios using 16 threads of execution on a single machine.
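The first-stage path re-arrangement can be sketched on a 1-D grid. The greedy wave construction below is an illustrative simplification of the strategy, not the SISIM/SGSIM implementation:

```python
def conflict_free_waves(path, radius):
    """Scan the simulation path in order and cut it into 'waves' of
    nodes that are pairwise farther apart than the kriging search
    radius. Nodes within a wave do not fall in each other's search
    neighbourhood, so they can be simulated in parallel; waves are
    then processed sequentially. (Real codes work on 3-D grids and
    must also preserve the conditioning order to reproduce the
    sequential realization exactly; the 1-D distance test here is a
    simplification.)"""
    waves, current = [], []
    for node in path:
        if all(abs(node - other) > radius for other in current):
            current.append(node)
        else:
            waves.append(current)
            current = [node]
    if current:
        waves.append(current)
    return waves
```

Large waves mean more parallelism; a dense path with a large search radius degenerates toward one node per wave, which is the sequential worst case.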
Teleconsultation in type 1 diabetes mellitus (TELEDIABE).
Bertuzzi, Federico; Stefani, Ilario; Rivolta, Benedetta; Pintaudi, Basilio; Meneghini, Elena; Luzi, Livio; Mazzone, Antonino
2018-02-01
The growing incidence of diabetes and the need to contain healthcare costs underscore the need to identify new models of care. Telemedicine offers an acknowledged instrument for providing clinical health care at a distance, increasing patient compliance and the achievement of therapeutic goals. The objective was to test the feasibility of teleconsultation for patients with type 1 diabetes mellitus and its efficacy in improving glycemic control. A randomized, open-label, parallel-arm, controlled trial was conducted in two diabetes centers in Italy. Participants affected by type 1 diabetes mellitus were randomly assigned (1:1) to receive their visits as standard or web-based care. Patients in the teleconsultation group can arrange their appointments on a website and also have access to web educational courses and to nutritional and psychological counseling. The primary outcome was the assessment of glycemic control by HbA1c measurement after a 12-month follow-up. Overall, 74 participants were followed for 1 year. HbA1c changes were not statistically different within (p = 0.56 for the standard care group; p = 0.45 for the telemedicine group) or between (p = 0.60) groups when considering differences from baseline to the end of the study. Patients randomized to teleconsultation reported fewer severe hypoglycemic episodes (p = 0.03). In addition, they were largely satisfied with the activities, perceived a good improvement in the self-management of their diabetes, and reported time savings and a cost reduction. In conclusion, TELEDIABE proposes a new system for the management of patients with type 1 diabetes mellitus.
Behavioral cessation treatment of waterpipe smoking: The first pilot randomized controlled trial
Asfar, Taghrid; Ali, Radwan Al; Rastam, Samer; Maziak, Wasim; Ward, Kenneth D.
2014-01-01
Background: Waterpipe use has increased dramatically in the Middle East and other parts of the world. Many users exhibit signs of dependence, including withdrawal and difficulty quitting, but there is no evidence base to guide cessation efforts. Methods: We developed a behavioral cessation program for willing-to-quit waterpipe users, and evaluated its feasibility and efficacy in a pilot, two-arm, parallel-group, randomized, open-label trial in Aleppo, Syria. Fifty adults who had smoked waterpipe ≥3 times per week in the last year, did not smoke cigarettes, and were interested in quitting were randomized to receive either brief (1 in-person session and 3 phone calls) or intensive (3 in-person sessions and 5 phone calls) behavioral cessation treatment delivered by a trained physician in a clinical setting. The primary efficacy end point was prolonged abstinence at three months post-quit day, assessed by self-report and exhaled carbon monoxide levels of <10 ppm. Secondary end points were 7-day point-prevalent abstinence and adherence to treatment. Results: Thirty percent of participants were fully adherent to treatment, which did not vary by treatment group. The proportions of participants in the brief and intensive interventions with prolonged abstinence at the 3-month assessment were 30.4% and 44.4%, respectively. Previous success in quitting (OR = 3.57; 95% CI = 1.03–12.43) predicted cessation. Higher baseline readiness to quit, more confidence in quitting, and being unemployed predicted better adherence to treatment (all p-values <0.05). Conclusions: Brief behavioral cessation treatment for waterpipe users appears to be feasible and effective. PMID:24629480
Porta-Roda, Oriol; Vara-Paniagua, Jesús; Díaz-López, Miguel A; Sobrado-Lozano, Pilar; Simó-González, Marta; Díaz-Bellido, Paloma; Reula-Blasco, María C; Muñoz-Garrido, Francisco
2015-08-01
To compare the efficacy and safety of Kegel exercises performed with or without vaginal spheres as treatment for women with urinary incontinence. Multicentre, parallel-group, open, randomized controlled trial. Women were allocated either to a pelvic floor muscle-training program consisting of Kegel exercises performed twice daily, 5 days/week at home, over 6 months with vaginal spheres, or to the same program without spheres. The primary endpoint was women's report of urinary incontinence at 6 months using the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-UI-SF). Secondary outcome measures were the 1-h pad test, King's Health Questionnaire (KHQ) and a five-point Likert scale for subjective evaluation. Adherence was measured with the Morisky-Green test. Thirty-seven women were randomized to the spheres group and 33 to the control group. The primary endpoint was evaluated in 65 women (35 in the spheres group vs. 30 controls). ICIQ-UI-SF results improved significantly at 1-month follow-up in the spheres group (P < 0.01) and at 6 months in the controls. The 1-h pad test improved in the spheres group but not in the control group. No significant differences were found in the KHQ results or in the subjective evaluation of efficacy and safety. Adherence was higher in the spheres group but the differences were not significant. Mild transient side effects were reported in four patients in the spheres group and one in the control group. Both treatments improved urinary incontinence, but women who performed the exercises with vaginal spheres showed an earlier improvement. Vaginal spheres were well tolerated and safe. © 2014 Wiley Periodicals, Inc.
Hashish, N M; Badway, H S; Abdelmoty, H I; Mowafy, A; Youssef, M A F M
2014-05-01
Follicular fluid of mature oocytes is rich in growth factors and cytokines that may exert paracrine and autocrine effects on implantation. The aim of this study was to investigate whether flushing the endometrial cavity with follicular fluid after oocyte retrieval improved pregnancy rates in subfertile women undergoing intracytoplasmic sperm injection (ICSI). One hundred subfertile women undergoing ICSI between April 2012 and September 2012 at the Centre for Reproductive Medicine, Cairo University, Egypt were enrolled in this open-label, parallel, randomized controlled study. Patients were randomized into two groups at the start of treatment using a computer-generated programme and sealed opaque envelopes: the follicular fluid group (n=50) and the control group (n=50). Inclusion criteria were: age 20-38 years; basal follicle-stimulating hormone <10 mIU/ml; body mass index <35 kg/m²; and estradiol >1000 pg/ml and <4000 pg/ml on the day of human chorionic gonadotrophin administration. Exclusion criteria were: evidence of endometriosis; uterine myoma; hydrosalpinges; endocrinological disorders; history of implantation failure in previous in-vitro fertilization/ICSI cycles; and severe male factor infertility. Clinical pregnancy and implantation rates were higher in the follicular fluid group compared with the control group [35.4% (17/48) vs 31.9% (15/47); p=0.718] and (18.6% vs 11.3%; p=0.153), respectively. However, the differences were not statistically significant. Flushing the endometrial cavity with follicular fluid after oocyte retrieval neither improved nor adversely affected clinical pregnancy and implantation rates in subfertile women undergoing ICSI. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Kaeley, Gurjit S; Evangelisto, Amy M; Nishio, Midori J; Goss, Sandra L; Liu, Shufang; Kalabic, Jasmina; Kupper, Hartmut
2016-08-01
To examine the clinical and ultrasonographic (US) outcomes of reducing methotrexate (MTX) dosage upon initiating adalimumab (ADA) in MTX-inadequate responders with moderately to severely active rheumatoid arthritis (RA). MUSICA (NCT01185288) was a double-blind, randomized, parallel-arm study of 309 patients with RA receiving MTX ≥ 15 mg/week for ≥ 12 weeks before screening. Patients were randomized to high dosage (20 mg/week) or low dosage (7.5 mg/week) MTX; all patients received 40 mg open-label ADA every other week for 24 weeks. The primary endpoint was Week 24 mean 28-joint Disease Activity Score based on C-reactive protein (DAS28-CRP) to test for noninferiority of low-dosage MTX using a 15% margin. US images were scored using a 10-joint semiquantitative system incorporating OMERACT definitions for pathology, assessing synovial hypertrophy, vascularity, and bony erosions. Rapid improvement in clinical indices was observed in both groups after addition of ADA. The difference in mean DAS28-CRP (0.37, 95% CI 0.07-0.66) comparing low-dosage (4.12, 95% CI 3.88-4.34) versus high-dosage MTX (3.75, 95% CI 3.52-3.97) was statistically significant and non-inferiority was not met. Statistically significant differences were not detected for most clinical, functional, and US outcomes. Pharmacokinetic and safety profiles were similar. In MUSICA, Week 24 mean DAS28-CRP, the primary endpoint, did not meet non-inferiority for the low-dosage MTX group. Although the differences between the 2 MTX dosage groups were small, our study findings did not support routine MTX reduction in MTX inadequate responders initiating ADA.
Burgos, Jorge; Pijoan, José I; Osuna, Carmen; Cobos, Patricia; Rodriguez, Leire; Centeno, María del Mar; Serna, Rosa; Jimenez, Antonia; Garcia, Eugenia; Fernandez-Llebrez, Luis; Melchor, Juan C
2016-05-01
Our objective was to compare the effect of two pain relief methods (remifentanil vs. nitrous oxide) on the success rate of external cephalic version. We conducted a randomized, open-label, parallel-group, controlled, single-center clinical trial with sequential design at Cruces University Hospital, Spain. Singleton pregnancies in noncephalic presentation at term that were referred for external cephalic version were assigned according to a balanced (1:1) restricted randomization scheme to analgesic treatment with remifentanil or nitrous oxide during the procedure. The primary endpoint was the external cephalic version success rate. Secondary endpoints were adverse event rate, degree of pain, cesarean rate and perinatal outcomes. The trial was stopped early after the second interim analysis due to a very low likelihood of finding substantial differences in efficacy (futility). A total of 120 women were recruited, 60 in each arm; the external cephalic version success rate was identical in the two arms (31/60, 51.7%). The mean pain score was significantly lower in the remifentanil group (3.2 ± 2.4 vs. 6.0 ± 2.3; p < 0.01). No differences were found in external cephalic version-related complications. There was a trend toward a higher frequency of adverse effects in the remifentanil group (18.3% vs. 6.7%, p = 0.10), with a significantly higher incidence rate (21.7 events/100 women vs. 6.7 events/100 women with nitrous oxide, p = 0.03). All reported adverse events were mild and reversible. Remifentanil analgesia decreased external cephalic version-related pain but did not increase the success rate of external cephalic version at term, and appeared to be associated with an increased frequency of mild adverse effects. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.
Dror, Adi; Shemesh, Einav; Dayan, Natali
2014-01-01
The abilities of enzymes to catalyze reactions in nonnatural environments of organic solvents have opened new opportunities for enzyme-based industrial processes. However, the main drawback of such processes is that most enzymes have a limited stability in polar organic solvents. In this study, we employed protein engineering methods to generate a lipase for enhanced stability in methanol, which is important for biodiesel production. Two protein engineering approaches, random mutagenesis (error-prone PCR) and structure-guided consensus, were applied in parallel on an unexplored lipase gene from Geobacillus stearothermophilus T6. A high-throughput colorimetric screening assay was used to evaluate lipase activity after an incubation period in high methanol concentrations. Both protein engineering approaches were successful in producing variants with elevated half-life values in 70% methanol. The best variant of the random mutagenesis library, Q185L, exhibited 23-fold-improved stability, yet its methanolysis activity was decreased by one-half compared to the wild type. The best variant from the consensus library, H86Y/A269T, exhibited 66-fold-improved stability in methanol along with elevated thermostability (+4.3°C) and a 2-fold-higher fatty acid methyl ester yield from soybean oil. Based on in silico modeling, we suggest that the Q185L substitution facilitates a closed lid conformation that limits access for both the methanol and substrate excess into the active site. The enhanced stability of H86Y/A269T was a result of formation of new hydrogen bonds. These improved characteristics make this variant a potential biocatalyst for biodiesel production. PMID:24362426
Tamura, Kazuo; Kawai, Yasukazu; Kiguchi, Toru; Okamoto, Masataka; Kaneko, Masahiko; Maemondo, Makoto; Gemba, Kenichi; Fujimaki, Katsumichi; Kirito, Keita; Goto, Tetsuya; Fujisaki, Tomoaki; Takeda, Kenji; Nakajima, Akihiro; Ueda, Takanori
2016-10-01
Control of serum uric acid (sUA) levels is very important during chemotherapy in patients with malignant tumors, as the risks of tumor lysis syndrome (TLS) and renal events are increased with increasing levels of sUA. We investigated the efficacy and safety of febuxostat, a potent non-purine xanthine oxidase inhibitor, compared with allopurinol for prevention of hyperuricemia in patients with malignant tumors, including solid tumors, receiving chemotherapy in Japan. An allopurinol-controlled multicenter, open-label, randomized, parallel-group comparative study was carried out. Patients with malignant tumors receiving chemotherapy, who had an intermediate risk of TLS or a high risk of TLS and were not scheduled to be treated with rasburicase, were enrolled and then randomized to febuxostat (60 mg/day) or allopurinol (300 or 200 mg/day). All patients started to take the study drug 24 h before chemotherapy. The primary objective was to confirm the non-inferiority of febuxostat to allopurinol based on the area under the curve (AUC) of sUA for a 6-day treatment period. Forty-nine and 51 patients took febuxostat and allopurinol, respectively. sUA decreased over time after initiation of study treatment. The least squares mean difference of the AUC of sUA between the treatment groups was -33.61 mg h/dL, and the 95 % confidence interval was -70.67 to 3.45, demonstrating the non-inferiority of febuxostat to allopurinol. No differences were noted in safety outcomes between the treatment groups. Febuxostat demonstrated an efficacy and safety similar to allopurinol in patients with malignant tumors receiving chemotherapy. http://www.clinicaltrials.jp ; Identifier: JapicCTI-132398.
Oxcarbazepine in migraine headache: a double-blind, randomized, placebo-controlled study.
Silberstein, S; Saper, J; Berenson, F; Somogyi, M; McCague, K; D'Souza, J
2008-02-12
To evaluate the efficacy, safety, and tolerability of oxcarbazepine (1,200 mg/day) vs placebo as prophylactic therapy for patients with migraine headaches. This multicenter, double-blind, randomized, placebo-controlled, parallel-group trial consisted of a 4-week single-blind baseline phase and a 15-week double-blind phase consisting of a 6-week titration period, an 8-week maintenance period, and a 1-week down-titration period, after which patients could enter a 13-week open-label extension phase. During the 6-week titration period, oxcarbazepine was initiated at 150 mg/day and increased by 150 mg/day every 5 days to a maximum tolerated dose of 1,200 mg/day. The primary outcome measure was change from baseline in the number of migraine attacks during the last 28-day period of the double-blind phase. Eighty-five patients were randomized to receive oxcarbazepine and 85 to receive placebo. There was no difference between the oxcarbazepine (-1.30) and placebo groups in mean change in number of migraine attacks from baseline during the last 28 days of the double-blind phase (-1.74; p = 0.2274). Adverse events were reported for 68 oxcarbazepine-treated patients (80%) and 55 placebo-treated patients (65%). The majority of adverse events were mild or moderate in severity. The most common adverse events (≥15% of patients) in the oxcarbazepine-treated group were fatigue (20.0%), dizziness (17.6%), and nausea (16.5%); no adverse event occurred in more than 15% of the placebo-treated patients. Overall, oxcarbazepine was safe and well tolerated; however, oxcarbazepine did not show efficacy in the prophylactic treatment of migraine headaches.
Davis, S; Gralla, J; Chan, L; Wiseman, A; Edelstein, C L
2018-06-01
The mammalian target of rapamycin (mTOR) pathway has been shown to be central to cyst formation and growth in patients with autosomal dominant polycystic kidney disease (ADPKD). Drugs that suppress mTOR signaling are frequently used as antiproliferative agents for maintenance immunosuppression in patients who have undergone kidney transplantation. The aim of this study was to determine the effect of sirolimus, an mTOR inhibitor, on cyst volume regression in patients with ADPKD who have undergone renal transplantation. In this single-center, prospective, open-label, parallel-group, randomized trial, 23 adult patients with ADPKD who successfully underwent renal transplantation from 2008 to 2012 were subsequently randomized (on a 1:1 basis) to a maintenance immunosuppression regimen with either sirolimus (sirolimus, tacrolimus, prednisone) or mycophenolate (mycophenolate, tacrolimus, prednisone). Total kidney volumes were measured by means of high-resolution magnetic resonance imaging within 2 weeks after transplantation and at 1 year. The primary end point was change in total kidney volume at 1 year. Sixteen patients completed the 1-year study (8 patients in each group). There was a decrease in kidney volume in both the sirolimus group (percentage change from baseline, 20.5%; P < .001) and mycophenolate group (percentage change from baseline, 17%; P = .048), but there was no significant difference in percentage change of total kidney volume between the groups (P = .665). In ADPKD patients at 1 year after kidney transplantation, there was a similar decrease in polycystic kidney volume in patients receiving an immunosuppression regimen containing sirolimus compared with patients receiving mycophenolate. Copyright © 2018 Elsevier Inc. All rights reserved.
Qiu, Ju; Liu, Yanping; Yue, Yanfen; Qin, Yuchang; Li, Zaigui
2016-12-01
Tartary buckwheat (TB) is rich in protein, dietary fiber, and flavonoids and has been reported to affect type 2 diabetes mellitus (T2DM) in animal experiments, but limited information is available on the benefit of TB as a whole food in T2DM patients. Thus, we tested the hypothesis that a daily replacement of a portion of the staple food with TB would improve risk factors of T2DM, including fasting glucose, insulin resistance, and lipid profile. In a parallel, randomized, open-label, controlled trial, 165 T2DM patients were randomly assigned to a control diet group (DC group; systematic diet plans and intensive nutritional education) or a TB intervention group (TB group; daily replacement of a portion of staple food with TB food). Blood samples and diet information were collected at baseline and after 4 weeks of intervention. Compared with the DC group at 4 weeks, the TB group showed decreases in fasting insulin (2.46 to 2.39 ln mU/L), total cholesterol (5.08 to 4.79 mmol/L), and low-density lipoprotein cholesterol (3.00 to 2.80 mmol/L) (P<.05). No significant differences in blood glucose or glycated hemoglobin levels were noted between the TB and DC groups. In addition, subgroup analyses based on daily TB intake dose showed reductions in insulin, total cholesterol, and low-density lipoprotein cholesterol, as well as in insulin resistance, when the TB intake dose was greater than 110 g/d. These results support the hypothesis that TB may improve insulin resistance and lipid profile in T2DM patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Bouchi, Ryotaro; Nakano, Yujiro; Fukuda, Tatsuya; Takeuchi, Takato; Murakami, Masanori; Minami, Isao; Izumiyama, Hajime; Hashimoto, Koshi; Yoshimoto, Takanobu; Ogawa, Yoshihiro
2017-03-31
Liraglutide, an analogue of human glucagon-like peptide 1, reduces cardiovascular events in patients with type 2 diabetes; however, the mechanisms by which liraglutide reduces cardiovascular events have remained unknown. Type 2 diabetic patients on insulin treatment were enrolled in this randomized, open-label, comparative study. Participants were randomly assigned to liraglutide plus insulin (liraglutide group) or insulin treatment (control group) at 1:1 allocation. The primary endpoint was the change in visceral fat area (VFA, cm²) at 24 weeks. Liver attenuation index (LAI) measured by abdominal computed tomography, urinary albumin-to-creatinine ratio (ACR, mg/g), C-reactive protein (CRP) levels, skeletal muscle index (SMI), and quality of life (QOL) related to diabetes treatment were also determined. Seventeen patients (8 in the liraglutide group, 9 in the control group; mean age 59 ± 13 years; 53% female) completed this study. Liraglutide treatment significantly reduced VFA at 24 weeks, whereas subcutaneous fat area (SFA) was unchanged. ACR, LAI, and CRP levels were significantly reduced by liraglutide at 24 weeks, and there was no difference in SMI between the two groups. Changes in VFA from baseline to 24 weeks were significantly associated with those in LAI, albuminuria, and HbA1c. Liraglutide treatment significantly improved QOL scores for anxiety and dissatisfaction with treatment and for satisfaction with treatment. No severe adverse events were observed in either group. Our data suggest that liraglutide could reduce visceral adiposity in parallel with attenuation of hepatic fat accumulation, albuminuria and micro-inflammation, and improve QOL related to diabetes care in insulin-treated patients with type 2 diabetes.
Kosaka, H; Okamoto, Y; Munesue, T; Yamasue, H; Inohara, K; Fujioka, T; Anme, T; Orisaka, M; Ishitobi, M; Jung, M; Fujisawa, T X; Tanaka, S; Arai, S; Asano, M; Saito, D N; Sadato, N; Tomoda, A; Omori, M; Sato, M; Okazawa, H; Higashida, H; Wada, Y
2016-01-01
Recent studies have suggested that long-term oxytocin administration can alleviate the symptoms of autism spectrum disorder (ASD); however, factors influencing its efficacy are still unclear. We conducted a single-center phase 2, pilot, randomized, double-blind, placebo-controlled, parallel-group, clinical trial in young adults with high-functioning ASD, to determine whether oxytocin dosage and genetic background of the oxytocin receptor affects oxytocin efficacy. This trial consisted of double-blind (12 weeks), open-label (12 weeks) and follow-up phases (8 weeks). To examine dose dependency, 60 participants were randomly assigned to high-dose (32 IU per day) or low-dose intranasal oxytocin (16 IU per day), or placebo groups during the double-blind phase. Next, we measured single-nucleotide polymorphisms (SNPs) in the oxytocin receptor gene (OXTR). In the intention-to-treat population, no outcomes were improved after oxytocin administration. However, in male participants, Clinical Global Impression-Improvement (CGI-I) scores in the high-dose group, but not the low-dose group, were significantly higher than in the placebo group. Furthermore, we examined whether oxytocin efficacy, reflected in the CGI-I scores, is influenced by estimated daily dosage and OXTR polymorphisms in male participants. We found that >21 IU per day oxytocin was more effective than ⩽21 IU per day, and that a SNP in OXTR (rs6791619) predicted CGI-I scores for ⩽21 IU per day oxytocin treatment. No severe adverse events occurred. These results suggest that efficacy of long-term oxytocin administration in young men with high-functioning ASD depends on the oxytocin dosage and genetic background of the oxytocin receptor, which contributes to the effectiveness of oxytocin treatment of ASD. PMID:27552585
Long-Term Effect of Pravastatin on Carotid Intima–Media Complex Thickness
Toyoda, Kazunori; Minematsu, Kazuo; Yasaka, Masahiro; Nagai, Yoji; Aoki, Shiro; Nezu, Tomohisa; Hosomi, Naohisa; Kagimura, Tatsuo; Origasa, Hideki; Kamiyama, Kenji; Suzuki, Rieko; Ohtsuki, Toshiho; Maruyama, Hirofumi; Kitagawa, Kazuo; Uchiyama, Shinichiro; Matsumoto, Masayasu
2018-01-01
Background and Purpose— The effect of statins on progression of carotid intima–media complex thickness (IMT) has been shown exclusively in nonstroke Western patients. This study aimed to determine the effect of low-dose pravastatin on carotid IMT in Japanese patients with noncardioembolic ischemic stroke. Methods— This is a substudy of the J-STARS trial (Japan Statin Treatment Against Recurrent Stroke), a multicenter, randomized, open-label, parallel-group trial to examine whether pravastatin reduces stroke recurrence. Patients were randomized to receive pravastatin (10 mg daily, usual dose in Japan; pravastatin group) or not to receive any statins (control group). The primary outcome was IMT change of the common carotid artery for a 5-year observation period. IMT change was compared using mixed-effects models for repeated measures. Results— Of 864 patients registered in this substudy, 71 without baseline ultrasonography were excluded, and 388 were randomly assigned to the pravastatin group and 405 to the control group. Baseline characteristics were not significantly different, except National Institutes of Health Stroke Scale scores (median, 0 [interquartile range, 0–2] versus 1 [interquartile range, 0–2]; P=0.019) between the 2 groups. Baseline IMT (mean±SD) was 0.887±0.155 mm in the pravastatin group and 0.887±0.152 mm in the control group (P=0.99). The annual change in the IMT at 5-year visit was significantly reduced in the pravastatin group as compared with that in the control group (0.021±0.116 versus 0.040±0.118 mm; P=0.010). Conclusions— The usual Japanese dose of pravastatin significantly reduced the progression of carotid IMT at 5 years in patients with noncardioembolic stroke. Clinical Trial Registration— URL: http://www.clinicaltrials.gov. Unique identifier: NCT00361530. PMID:29191850
Rusconi, Stefano; Vitiello, Paola; Adorni, Fulvio; Colella, Elisa; Focà, Emanuele; Capetti, Amedeo; Meraviglia, Paola; Abeli, Clara; Bonora, Stefano; D'Annunzio, Marco; Di Biagio, Antonio; Di Pietro, Massimo; Butini, Luca; Orofino, Giancarlo; Colafigli, Manuela; d'Ettorre, Gabriella; Francisci, Daniela; Parruti, Giustino; Soria, Alessandro; Buonomini, Anna Rita; Tommasi, Chiara; Mosti, Silvia; Bai, Francesca; Di Nardo Stuppino, Silvia; Morosi, Manuela; Montano, Marco; Tau, Pamela; Merlini, Esther; Marchetti, Giulia
2013-01-01
Immunological non-responders (INRs) lack a CD4 increase despite HIV-viremia suppression on HAART and have an increased risk of disease progression. We assessed the immune reconstitution profile upon intensification with maraviroc in INRs. We designed a multicenter, randomized, parallel, open-label, phase 4 superiority trial. We enrolled 97 patients on HAART with CD4+<200/µL and/or CD4+ recovery ≤25% and HIV-RNA<50 cp/mL. Patients were randomized 1:1 to HAART+maraviroc or continued HAART. CD4+ and CD8+ CD45+RA/RO, Ki67 expression and plasma IL-7 were quantified at W0, W12 and W48. By W48 both groups displayed a CD4 increase without a significant inter-group difference. A statistically significant change in CD8 favored patients in the HAART+maraviroc arm versus HAART at W12 (p=.009) and W48 (p=.025). The CD4>200/µL and CD4>200/µL + CD4 gain ≥25% end-points were not satisfied at W12 (p=.24 and p=.619) nor at W48 (p=.076 and p=.236). Patients continuing HAART displayed no major changes in parameters of T-cell homeostasis and activation. Maraviroc-receiving patients experienced a significant rise in circulating IL-7 by W48 (p=.01), and a trend toward a temporary reduction in activated HLA-DR+CD38+CD4+ cells by W12 (p=.06) that was not maintained at W48. Maraviroc intensification in INRs did not confer a significant advantage in reconstituting the CD4 T-cell pool, but did substantially expand CD8 cells. It resulted in a low rate of treatment discontinuations. ClinicalTrials.gov NCT00884858 http://clinicaltrials.gov/show/NCT00884858.
Patil, Vijay Maruti; Noronha, Vanita; Joshi, Amit; Muddu, Vamshi Krishna; Dhumal, Sachin; Bhosale, Bharatsingh; Arya, Supreeta; Juvekar, Shashikant; Banavali, Shripad; D'Cruz, Anil; Bhattacharjee, Atanu; Prabhash, Kumar
2015-03-01
Cetuximab-based treatment is the recommended chemotherapy for head and neck squamous cell cancers in the palliative setting. However, due to financial constraints, intravenous (IV) chemotherapy without cetuximab is commonly used in less developed countries. We believe that oral metronomic chemotherapy (MCT) may be safer and more effective in this setting. We conducted an open-label, superiority, parallel-design, randomized phase II trial comparing oral MCT [daily celecoxib (200 mg twice daily) and weekly methotrexate (15 mg/m²)] to intravenous single-agent cisplatin (IP) (75 mg/m²) given 3-weekly. Eligible patients had head and neck cancers requiring palliative chemotherapy, with ECOG PS 0-2 and adequate organ function, and could not afford cetuximab. The primary end point was progression-free survival. 110 patients were recruited between July 2011 and May 2013, 57 randomized to the MCT arm and 53 to the IP arm. Patients in the MCT arm had significantly longer PFS (median 101 days, 95% CI: 58.2-143.7 days) compared to the IP arm (median 66 days, 95% CI: 55.8-76.1 days) (p=0.014). The overall survival (OS) was also significantly longer in the MCT arm (median 249 days, 95% CI: 222.5-275.5 days) compared to the IP arm (median 152 days, 95% CI: 104.2-199.8 days) (p=0.02). There were fewer grade 3/4 adverse effects with MCT, although the difference was not significant (18.9% vs. 31.4%, p=0.14). Oral metronomic chemotherapy has significantly better PFS and OS than single-agent platinum in the palliative setting. Copyright © 2014 Elsevier Ltd. All rights reserved.
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
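The scheme can be sketched with a toy model. The two-rule set below is invented for illustration (it is not the paper's actual etching rules): the substrate is swept sequentially, and the randomness enters only through which deposition rule fires at each site.

```python
# Illustrative sketch only: the rule set is hypothetical, not the paper's
# etching rules. The substrate is scanned sequentially (site order is fixed);
# the random choice is which of two deposition rules is applied at each site.
import numpy as np

def sweep(h, rng):
    L = len(h)
    for i in range(L):                 # sequential scan, no random site choice
        if rng.random() < 0.5:
            h[i] += 1                  # rule A: deposit on the site itself
        else:
            left, right = (i - 1) % L, (i + 1) % L
            j = right if h[right] < h[left] else left
            h[j] += 1                  # rule B: deposit on the lower neighbour
    return h

rng = np.random.default_rng(42)
h = np.zeros(64, dtype=int)
for _ in range(100):
    sweep(h, rng)                      # 100 sweeps -> 6400 deposited particles
```

Because the site order is fixed, the inner loop accesses memory contiguously (the cache benefit mentioned in the abstract), and disjoint stretches of the substrate could be swept by different threads.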
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois-Henry; ...
2016-10-27
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared-memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.
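The randomized sampling at the heart of the HSS construction can be sketched with the generic randomized range finder (a textbook primitive, not STRUMPACK's actual interpolative-decomposition code): multiplying a block by a few random vectors reveals its numerical range, from which a low-rank factorization follows.

```python
# Sketch of randomized low-rank compression (the generic range-finder idea,
# not STRUMPACK's implementation): sample the matrix with random vectors,
# orthonormalize the samples, and project.
import numpy as np

def randomized_lowrank(A, rank, oversample=10, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))  # random test vectors
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    B = Q.T @ A                      # small (rank+oversample) x n factor
    return Q, B                      # A is approximated by Q @ B

# A rank-5 test matrix built from smooth outer products
x = np.linspace(0, 1, 300)
A = sum(np.outer(np.sin((k + 1) * np.pi * x),
                 np.cos((k + 1) * np.pi * x)) for k in range(5))
Q, B = randomized_lowrank(A, rank=5)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

For an off-diagonal block of exact rank r, sampling with slightly more than r random vectors captures the range essentially to machine precision; the same products A @ Omega can be computed for many blocks at once, which is what makes the approach attractive inside a multifrontal factorization.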
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel
2018-01-01
This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms, and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework, which provides a portable, vendor-agnostic, high-performance solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios of up to 75x compared with multi-threaded, multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
BCYCLIC: A parallel block tridiagonal matrix cyclic solver
NASA Astrophysics Data System (ADS)
Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.
2010-09-01
A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides, which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. Comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
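For the scalar (1x1 block) case, the cyclic reduction at the heart of BCYCLIC can be sketched as follows. Each level eliminates the even-indexed unknowns and recurses on the odd-indexed ones; all eliminations within a level are independent, which is what makes the algorithm easy to parallelize. This is a serial illustration for systems of size 2**k - 1, not the BCYCLIC block code:

```python
def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    (with a[0] = c[-1] = 0 and size 2**k - 1): combine each odd row with its
    two neighbors to eliminate the even-indexed unknowns, recurse on the
    odd-indexed ones, then back-substitute."""
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    na, nb, nc, nd = [], [], [], []
    for i in range(1, n, 2):              # independent -> parallelizable level
        alpha, gamma = a[i] / b[i - 1], c[i] / b[i + 1]
        na.append(-alpha * a[i - 1])
        nb.append(b[i] - alpha * c[i - 1] - gamma * a[i + 1])
        nc.append(-gamma * c[i + 1])
        nd.append(d[i] - alpha * d[i - 1] - gamma * d[i + 1])
    x = [0.0] * n
    for j, xo in zip(range(1, n, 2), cyclic_reduction(na, nb, nc, nd)):
        x[j] = xo
    for i in range(0, n, 2):              # back-substitution, also independent
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i + 1 < n else 0.0
        x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Diagonally dominant test system of size 7 whose exact solution is all ones.
n = 7
a = [0.0] + [1.0] * (n - 1)
b = [4.0] * n
c = [1.0] * (n - 1) + [0.0]
d = [a[i] + b[i] + c[i] for i in range(n)]
x = cyclic_reduction(a, b, c, d)
print(x)  # ≈ [1.0, 1.0, ..., 1.0]
```

Replacing the scalar divisions with block LU solves gives the block variant; the per-level loops are the natural targets for OpenMP-style threading.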
Parallelization of the preconditioned IDR solver for modern multicore computer systems
NASA Astrophysics Data System (ADS)
Bessonov, O. A.; Fedoseyev, A. I.
2012-10-01
This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, and kinetic and quantum problems. It employs the iterative IDR algorithm with ILU preconditioning of user-chosen order. CNSPACK has been used successfully during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, recent years have seen a dramatic change in processor architectures and computer system organization. In light of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of an efficient parallel implementation are presented for modern computer systems (Intel Core i7-9xx and two-processor Xeon 55xx/56xx).
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a data-rich, multidimensional representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in long computation times. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through parallel hardware and a parallel programming model that is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared the classification accuracy and timing results of the Hadoop-GPU system against the following test cases: a combined CPU and GPU case, a CPU-only case, and a case in which no dimensionality reduction was applied.
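A MapReduce pipeline splits work into independent map tasks over data chunks and a reduce step that merges the partial results. The toy sketch below uses a thread pool as a stand-in for Hadoop, and the per-pixel "classification" (band of maximum reflectance) is a hypothetical placeholder for a real classifier:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_task(chunk):
    """Map: classify each pixel independently (placeholder rule: argmax band)."""
    return Counter(max(range(len(pixel)), key=pixel.__getitem__) for pixel in chunk)

def reduce_task(partials):
    """Reduce: merge the per-chunk class histograms into one."""
    return reduce(lambda acc, cnt: acc + cnt, partials, Counter())

# A tiny fake "hyperspectral" image: 8 pixels x 3 spectral bands.
pixels = [(0.1, 0.9, 0.2), (0.8, 0.1, 0.3), (0.2, 0.2, 0.9), (0.7, 0.6, 0.1),
          (0.1, 0.8, 0.4), (0.9, 0.3, 0.2), (0.3, 0.1, 0.8), (0.2, 0.7, 0.5)]
chunks = [pixels[:4], pixels[4:]]            # split the image across workers
with ThreadPoolExecutor(max_workers=2) as pool:
    histogram = reduce_task(pool.map(map_task, chunks))
print(dict(histogram))  # class index -> pixel count over the whole image
```

The per-pixel independence of the map step is what lets Hadoop distribute the chunks and, in the paper's system, what lets the GPU process many pixels concurrently.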
Gyrokinetic continuum simulation of turbulence in a straight open-field-line plasma
Shi, E. L.; Hammett, G. W.; Stoltzfus-Dueck, T.; ...
2017-05-29
Here, five-dimensional gyrokinetic continuum simulations of electrostatic plasma turbulence in a straight, open-field-line geometry have been performed using a full-f discontinuous-Galerkin approach implemented in the Gkeyll code. While various simplifications have been used for now, such as long-wavelength approximations in the gyrokinetic Poisson equation and the Hamiltonian, these simulations include the basic elements of a fusion-device scrape-off layer: localised sources to model plasma outflow from the core, cross-field turbulent transport, parallel flow along magnetic field lines, and parallel losses at the limiter or divertor with sheath-model boundary conditions. The set of sheath-model boundary conditions used in the model allows currents to flow through the walls. In addition to details of the numerical approach, results from numerical simulations of turbulence in the Large Plasma Device, a linear device featuring straight magnetic field lines, are presented.
LAMMPS strong scaling performance optimization on Blue Gene/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffman, Paul; Jiang, Wei; Romero, Nichols A.
2014-11-12
LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million-atom simulation scaling up to 16 racks on the Mira system at the Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.
Hybrid MPI+OpenMP Programming of an Overset CFD Solver and Performance Investigations
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Jin, Haoqiang H.; Biegel, Bryan (Technical Monitor)
2002-01-01
This report describes a two-level parallelization of a Computational Fluid Dynamic (CFD) solver with multi-zone overset structured grids. The approach is based on a hybrid MPI+OpenMP programming model suitable for shared memory machines and clusters of shared memory machines. The performance investigations of the hybrid application on an SGI Origin2000 (O2K) machine are reported using medium- and large-scale test problems.
Open SHMEM Reference Implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritchard, Howard; Curtis, Anthony; Welch, Aaron
2016-05-12
OpenSHMEM is an effort to create a specification for a standardized API for parallel programming in the Partitioned Global Address Space. Along with the specification the project is also creating a reference implementation of the API. This implementation attempts to be portable, to allow it to be deployed in multiple environments, and to be a starting point for implementations targeted to particular hardware platforms. It will also serve as a springboard for future development of the API.
Chirp- and random-based coded ultrasonic excitation for localized blood-brain barrier opening
Kamimura, HAS; Wang, S; Wu, S-Y; Karakatsani, ME; Acosta, C; Carneiro, AAO; Konofagou, EE
2015-01-01
Chirp- and random-based coded excitation methods have been proposed to reduce standing wave formation and improve focusing of transcranial ultrasound. However, no clear evidence has been shown to support the benefits of these ultrasonic excitation sequences in vivo. This study evaluates the chirp and periodic selection of random frequency (PSRF) coded-excitation methods for opening the blood-brain barrier (BBB) in mice. Three groups of mice (n=15) were injected with polydisperse microbubbles and sonicated in the caudate putamen using the chirp/PSRF coded (bandwidth: 1.5-1.9 MHz, peak negative pressure: 0.52 MPa, duration: 30 s) or standard ultrasound (frequency: 1.5 MHz, pressure: 0.52 MPa, burst duration: 20 ms, duration: 5 min) sequences. T1-weighted contrast-enhanced MRI scans were performed to quantitatively analyze focused-ultrasound-induced BBB opening. The mean opening volumes evaluated from the MRI were 9.38±5.71 mm3, 8.91±3.91 mm3 and 35.47±5.10 mm3 for the chirp, random and regular sonications, respectively. The mean cavitation levels were 55.40±28.43 V.s, 63.87±29.97 V.s and 356.52±257.15 V.s for the chirp, random and regular sonications, respectively. The chirp and PSRF coded pulsing sequences improved BBB opening localization by inducing lower cavitation levels and smaller opening volumes compared with the regular sonication technique. Larger bandwidths were associated with more focused targeting but were limited by the frequency response of the transducer, skull attenuation, and the microbubbles' optimal frequency range. The coded methods could therefore facilitate highly localized drug delivery as well as benefit other transcranial ultrasound techniques that use higher pressure levels and higher precision to induce the necessary bioeffects in a brain region while avoiding damage to the surrounding healthy tissue. PMID:26394091
Dickson, Richard K.
2010-09-07
A quick insert and release laser beam guard panel clamping apparatus having a base plate mountable on an optical table, a first jaw affixed to the base plate, and a spring-loaded second jaw slidably carried by the base plate to exert a clamping force. The first and second jaws each having a face acutely angled relative to the other face to form a V-shaped, open channel mouth, which enables wedge-action jaw separation by and subsequent clamping of a laser beam guard panel inserted through the open channel mouth. Preferably, the clamping apparatus also includes a support structure having an open slot aperture which is positioned over and parallel with the open channel mouth.
Deep crustal deformation by sheath folding in the Adirondack Mountains, USA
NASA Technical Reports Server (NTRS)
Mclelland, J. M.
1988-01-01
As described by McLelland and Isachsen, the southern half of the Adirondacks are underlain by major isoclinal (F sub 1) and open-upright (F sub 2) folds whose axes are parallel, trend approximately E-W, and plunge gently about the horizontal. These large structures are themselves folded by open upright folds trending NNE (F sub 3). It is pointed out that elongation lineations in these rocks are parallel to X of the finite strain ellipsoid developed during progressive rotational strain. The parallelism between F sub 1 and F sub 2 fold axes and elongation lineations led to the hypothesis that progressive rotational strain, with a west-directed tectonic transport, rotated earlier F sub 1-folds into parallelism with the evolving elongation lineation. Rotation is accomplished by ductile, passive flow of F sub 1-axes into extremely arcuate, E-W hinges. In order to test these hypotheses a number of large folds were mapped in the eastern Adirondacks. Other evidence supporting the existence of sheath folds in the Adirondacks is the presence, on a map scale, of synforms whose limbs pass through the vertical and into antiforms. This type of outcrop pattern is best explained by intersecting a horizontal plane with the double curvature of sheath folds. It is proposed that sheath folding is a common response of hot, ductile rocks to rotational strain at deep crustal levels. The recognition of sheath folds in the Adirondacks reconciles the E-W orientation of fold axes with an E-W elongation lineation.
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik
2013-01-01
Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358
2014-01-01
Background There is a need for evidence of the clinical effectiveness of minimally invasive surgery for the treatment of esophageal cancer, but randomized controlled trials in surgery are often difficult to conduct. The ROMIO (Randomized Open or Minimally Invasive Oesophagectomy) study will establish the feasibility of a main trial which will examine the clinical and cost-effectiveness of minimally invasive and open surgical procedures for the treatment of esophageal cancer. Methods/Design A pilot randomized controlled trial (RCT), in two centers (University Hospitals Bristol NHS Foundation Trust and Plymouth Hospitals NHS Trust) will examine numbers of incident and eligible patients who consent to participate in the ROMIO study. Interventions will include esophagectomy by: (1) open gastric mobilization and right thoracotomy, (2) laparoscopic gastric mobilization and right thoracotomy, and (3) totally minimally invasive surgery (in the Bristol center only). The primary outcomes of the feasibility study will be measures of recruitment, successful development of methods to monitor quality of surgery and fidelity to a surgical protocol, and development of a core outcome set to evaluate esophageal cancer surgery. The study will test patient-reported outcomes measures to assess recovery, methods to blind participants, assessments of surgical morbidity, and methods to capture cost and resource use. ROMIO will integrate methods to monitor and improve recruitment using audio recordings of consultations between recruiting surgeons, nurses, and patients to provide feedback for recruiting staff. Discussion The ROMIO study aims to establish efficient methods to undertake a main trial of minimally invasive surgery versus open surgery for esophageal cancer. 
Trial registration: The pilot trial has Current Controlled Trials registration number ISRCTN59036820 (25/02/2013) at http://www.controlled-trials.com; the ROMIO trial record at that site gives a link to the original version of the study protocol. PMID:24888266
Scalar collapse in AdS with an OpenCL open source code
NASA Astrophysics Data System (ADS)
Liebling, Steven L.; Khanna, Gaurav
2017-10-01
We study the spherically symmetric collapse of a scalar field in anti-de Sitter spacetime using a newly constructed, open-source code which parallelizes over heterogeneous architectures using the open standard OpenCL. An open question for this scenario concerns how to tell, a priori, whether some form of initial data will be stable or will instead develop under the turbulent instability into a black hole in the limit of vanishing amplitude. Previous work suggested the existence of islands of stability around quasi-periodic solutions, and we use this new code to examine the stability properties of approximately quasi-periodic solutions which balance energy transfer to higher modes with energy transfer to lower modes. The evolutions provide some evidence, though not conclusively, for stability of initial data sufficiently close to quasi-periodic solutions.
Stenberg, Erik; Szabo, Eva; Ottosson, Johan; Thorell, Anders; Näslund, Ingmar
2018-01-01
Mesenteric defect closure in laparoscopic gastric bypass surgery has been reported to reduce the risk for small bowel obstruction. Little is known, however, about the effect of mesenteric defect closure on patient-reported outcome. The aim of the present study was to see if mesenteric defect closure affects health-related quality-of-life (HRQoL) after laparoscopic gastric bypass. Patients operated at 12 centers for bariatric surgery participated in this randomized two-arm parallel study. During the operation, patients were randomized to closure of the mesenteric defects or non-closure. This study was a post-hoc analysis comparing HRQoL of the two groups before surgery, at 1 and 2 years after the operation. HRQoL was estimated using the short form 36 (SF-36-RAND) and the obesity problems (OP) scale. Between May 1, 2010, and November 14, 2011, 2507 patients were included in the study and randomly assigned to mesenteric defect closure (n = 1259) or non-closure (n = 1248). In total, 1619 patients (64.6%) reported on their HRQoL at the 2-year follow-up. Mesenteric defect closure was associated with slightly higher rating of social functioning (87 ± 22.1 vs. 85 ± 24.2, p = 0.047) and role emotional (85 ± 31.5 vs. 82 ± 35.0, p = 0.027). No difference was seen on the OP scale (open defects 22 ± 24.8 vs. closed defects 20 ± 23.8, p = 0.125). When comparing mesenteric defect closure with non-closure, there is no clinically relevant difference in HRQoL after laparoscopic gastric bypass surgery.
Sheikhmoonesi, Fatemeh; Zarghami, Mehran; Mamashli, Shima; Yazdani Charati, Jamshid; Hamzehpour, Romina; Fattahi, Samineh; Azadbakht, Rahil; Kashi, Zahra; Ala, Shahram; Moshayedi, Mona; Alinia, Habibollah; Hendouei, Narjes
2016-01-01
In this study, the aim was to determine whether adding vitamin D to the standard therapeutic regimen of schizophrenic male patients with inadequate vitamin D status could improve some aspects of the symptom burden. This study was an open-label, parallel-group randomized clinical trial. Eighty patients with chronic stable schizophrenia with residual symptoms and vitamin D deficiency were recruited and randomized to receive either a single 600000 IU vitamin D injection along with their antipsychotic regimen or their antipsychotic regimen alone. Serum vitamin D was measured twice: at baseline and in the fourth month. The Positive and Negative Syndrome Scale (PANSS) was assessed at baseline and in the fourth month. During the study, the vitamin D serum changes in the vitamin group and control group were 22.1 ± 19.9 (95%CI = 15.9-28.8) and 0.2 ± 1.7 (95%CI = 0.2-0.8) ng/mL, respectively (p<0.001). The changes in PANSS positive subscale score (P) were -0.1 ± 0.7 (95%CI = -0.3-0.1) and 0.00 ± 0.8 (95%CI = -0.2-0.2) in the vitamin D and control groups, respectively (p=0.5). The changes in PANSS negative subscale score (N) were -0.1 ± 0.7 (95%CI = -0.3-0.05) and -0.1 ± 0.5 (95%CI = -0.2-0.04) in the vitamin D and control groups, respectively (p = 0.7), and there was a negative but non-significant correlation between serum vitamin D level changes and PANSS negative subscale score (r = -0.04, p = 0.7). We did not find a relationship between serum vitamin D level changes and the improvement of negative and positive symptoms in schizophrenic patients; more randomized clinical trials are required to confirm our findings.
Klukowska, Malgorzata; Grender, Julie M; Conde, Erinn; Ccahuana-Vasquez, Renzo Alberto; Ram Goyal, C
2014-08-01
To compare the efficacy of an oscillating-rotating power toothbrush with a novel brush head incorporating angled CrissCross bristles (Oral-B Pro 7000 SmartSeries and Oral-B CrossAction brush head) versus a marketed sonic toothbrush (Colgate ProClinical A1500 with the Triple Clean brush head) in the reduction of gingivitis and plaque over a 6-week period. This was a single-center, randomized, open-label, examiner-blind, 2-treatment, parallel-group study. Study participants who met the entrance criteria were enrolled in the study and randomly assigned to one of the two toothbrush groups. Study participants brushed with their assigned toothbrush and a marketed fluoride dentifrice for 2 minutes twice daily at home for 6 weeks. Gingivitis and plaque were evaluated at baseline and Week 6. Gingivitis was assessed using the Modified Gingival Index (MGI) and Gingival Bleeding Index (GBI), and plaque was assessed using the Rustogi Modified Navy Plaque Index (RMNPI). Data were analyzed using ANCOVA with baseline as the covariate. In total, 130 study participants were randomized to treatment, resulting in 64 study participants per group completing the study. Both brushes produced statistically significant (P < 0.001) reductions in gingivitis and plaque measures relative to baseline. The oscillating-rotating brush with the novel brush head demonstrated statistically significantly (P < 0.05) greater reductions in all gingivitis measures, as well as whole-mouth and interproximal plaque measures, compared to the sonic toothbrush. The benefit for the oscillating-rotating brush over the sonic brush was 21.3% for gingivitis, 35.7% for gingival bleeding, 34.7% for number of bleeding sites, 17.4% for whole-mouth plaque, and 21.2% for interproximal plaque. There were no adverse events reported or observed for either brush.
Aguado, José María; Vázquez, Lourdes; Fernández-Ruiz, Mario; Villaescusa, Teresa; Ruiz-Camps, Isabel; Barba, Pere; Silva, Jose T; Batlle, Montserrat; Solano, Carlos; Gallardo, David; Heras, Inmaculada; Polo, Marta; Varela, Rosario; Vallejo, Carlos; Olave, Teresa; López-Jiménez, Javier; Rovira, Montserrat; Parody, Rocío; Cuenca-Estrella, Manuel
2015-02-01
The benefit of the combination of serum galactomannan (GM) assay and polymerase chain reaction (PCR)-based detection of serum Aspergillus DNA for the early diagnosis and therapy of invasive aspergillosis (IA) in high-risk hematological patients remains unclear. We performed an open-label, controlled, parallel-group randomized trial in 13 Spanish centers. Adult patients with acute myeloid leukemia and myelodysplastic syndrome on induction therapy or allogeneic hematopoietic stem cell transplant recipients were randomized (1:1 ratio) to 1 of 2 arms: "GM-PCR group" (the results of serial serum GM and PCR assays were provided to treating physicians) and "GM group" (only the results of serum GM were reported). Positivity in either assay prompted a thoracic computed tomography scan and initiation of antifungal therapy. No antimold prophylaxis was permitted. Overall, 219 patients underwent randomization (105 in the GM-PCR group and 114 in the GM group). The cumulative incidence of "proven" or "probable" IA (primary study outcome) was lower in the GM-PCR group (4.2% vs 13.1%; odds ratio, 0.29 [95% confidence interval, .09-.91]). The median interval from the start of monitoring to the diagnosis of IA was lower in the GM-PCR group (13 vs 20 days; P = .022), as was the use of empirical antifungal therapy (16.7% vs 29.0%; P = .038). Patients in the GM-PCR group had higher proven or probable IA-free survival (P = .027). A combined monitoring strategy based on serum GM and Aspergillus DNA was associated with an earlier diagnosis and a lower incidence of IA in high-risk hematological patients. Clinical Trials Registration: NCT01742026. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Motivational Enhancement for Increasing Adherence to CPAP: A Randomized Controlled Trial.
Bakker, Jessie P; Wang, Rui; Weng, Jia; Aloia, Mark S; Toth, Claudia; Morrical, Michael G; Gleason, Kevin J; Rueschman, Michael; Dorsey, Cynthia; Patel, Sanjay R; Ware, James H; Mittleman, Murray A; Redline, Susan
2016-08-01
Motivational enhancement (ME) shows promise as a means of increasing adherence to CPAP for OSA. We performed an open-label, parallel-arm, randomized controlled trial of CPAP only or CPAP + ME, recruiting individuals aged 45 to 75 years with moderate or severe OSA without marked sleepiness and with either established cardiovascular disease (CVD) or at risk for CVD. All participants received standardized CPAP support from a sleep technologist; those randomly assigned to CPAP + ME also received standardized ME delivered by a psychologist during two appointments and six phone calls over 32 weeks. Mixed-effect models with subject-specific intercepts and slopes were fitted to compare objective CPAP adherence between arms, adjusting for follow-up duration, randomization factors, and device manufacturer. All analyses were intention-to-treat. Overall, 83 participants (n = 42 CPAP only; n = 41 CPAP + ME) contributed 14,273 nights of data over 6 months. Participants were predominantly male (67%) and had a mean ± SD age of 63.9 ± 7.4 years, a BMI of 31.1 ± 5.2 kg/m(2), and an apnea-hypopnea index of 26.2 ± 12.9 events/h. In our fully adjusted model, average nightly adherence over 6 months was 99.0 min/night higher with CPAP + ME compared with CPAP only (P = .003; primary analysis). A subset of 52 participants remained in the study for 12 months; modeling these data yielded a consistent difference in adherence between arms of 97 min/night (P = .006) favoring CPAP + ME. ME delivered during brief appointments and phone calls resulted in a clinically significant increase in CPAP adherence. This strategy may represent a feasible approach for optimizing management of OSA. ClinicalTrials.gov; No.: NCT01261390; URL: www.clinicaltrials.gov. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Priotto, Gerardo; Fogg, Carole; Balasegaram, Manica; Erphas, Olema; Louga, Albino; Checchi, Francesco; Ghabri, Salah; Piola, Patrice
2006-01-01
Objectives: Our objective was to compare the efficacy and safety of three drug combinations for the treatment of late-stage human African trypanosomiasis caused by Trypanosoma brucei gambiense. Design: This trial was a randomized, open-label, active control, parallel clinical trial comparing three arms. Setting: The study took place at the Sleeping Sickness Treatment Center run by Médecins Sans Frontières at Omugo, Arua District, Uganda. Participants: Stage 2 patients diagnosed in Northern Uganda were screened for inclusion and a total of 54 were selected. Interventions: Three drug combinations were given to randomly assigned patients: melarsoprol-nifurtimox (M+N), melarsoprol-eflornithine (M+E), and nifurtimox-eflornithine (N+E). Dosages were uniform: intravenous (IV) melarsoprol 1.8 mg/kg/d, daily for 10 d; IV eflornithine 400 mg/kg/d, every 6 h for 7 d; oral nifurtimox 15 (adults) or 20 (children <15 y) mg/kg/d, every 8 h for 10 d. Patients were followed up for 24 mo. Outcome Measures: Outcomes were cure rates and adverse events attributable to treatment. Results: Randomization was performed on 54 patients before enrollment was suspended due to unacceptable toxicity in one of the three arms. Cure rates obtained with the intention-to-treat analysis were M+N 44.4%, M+E 78.9%, and N+E 94.1%, and were significantly higher with N+E (p = 0.003) and M+E (p = 0.045) than with M+N. Adverse events were less frequent and less severe with N+E, resulting in fewer treatment interruptions and no fatalities. Four patients died who were taking melarsoprol-nifurtimox and one who was taking melarsoprol-eflornithine. Conclusions: The N+E combination appears to be a promising first-line therapy that may improve treatment of sleeping sickness, although the results from this interrupted study do not permit conclusive interpretations. Larger studies are needed to continue the evaluation of this drug combination in the treatment of T. b. gambiense sleeping sickness. PMID:17160135
Rintoul, Robert C; Ritchie, Andrew J; Edwards, John G; Waller, David A; Coonar, Aman S; Bennett, Maxine; Lovato, Eleonora; Hughes, Victoria; Fox-Rushby, Julia A; Sharples, Linda D
2014-09-20
Malignant pleural mesothelioma incidence continues to rise, with few available evidence-based therapeutic options. Results of previous non-randomised studies suggested that video-assisted thoracoscopic partial pleurectomy (VAT-PP) might improve symptom control and survival. We aimed to compare efficacy in terms of overall survival, and cost, of VAT-PP and talc pleurodesis in patients with malignant pleural mesothelioma. We undertook an open-label, parallel-group, randomised, controlled trial in patients aged 18 years or older with any subtype of confirmed or suspected mesothelioma with pleural effusion, recruited from 12 hospitals in the UK. Eligible patients were randomly assigned (1:1) to either VAT-PP or talc pleurodesis by computer-generated random numbers, stratified by European Organisation for Research and Treatment of Cancer risk category (high vs low). The primary outcome was overall survival at 1 year, analysed by intention to treat (all patients randomly assigned to a treatment group with a final diagnosis of mesothelioma). This trial is registered with ClinicalTrials.gov, number NCT00821860. Between Oct 24, 2003, and Jan 24, 2012, we randomly assigned 196 patients, of whom 175 (88 assigned to talc pleurodesis, 87 assigned to VAT-PP) had confirmed mesothelioma. Overall survival at 1 year was 52% (95% CI 41-62) in the VAT-PP group and 57% (46-66) in the talc pleurodesis group (hazard ratio 1·04 [95% CI 0·76-1·42]; p=0·81). Surgical complications were significantly more common after VAT-PP than after talc pleurodesis, occurring in 24 (31%) of 78 patients who completed VAT-PP versus ten (14%) of 73 patients who completed talc pleurodesis (p=0·019), as were respiratory complications (19 [24%] vs 11 [15%]; p=0·22) and air-leak beyond 10 days (five [6%] vs one [1%]; p=0·21), although not significantly so. 
Median hospital stay was longer at 7 days (IQR 5-11) in patients who received VAT-PP compared with 3 days (2-5) for those who received talc pleurodesis (p<0·0001). VAT-PP is not recommended to improve overall survival in patients with pleural effusion due to malignant pleural mesothelioma, and talc pleurodesis might be preferable considering the fewer complications and shorter hospital stay associated with this treatment. BUPA Foundation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mohammedi, Kamel; Potier, Louis; François, Maud; Dardari, Dured; Feron, Marilyne; Nobecourt-Dupuy, Estelle; Dolz, Manuel; Ducloux, Roxane; Chibani, Abdelkader; Eveno, Dominique-François; Crea Avila, Teresa; Sultan, Ariane; Baillet-Blanco, Laurence; Rigalleau, Vincent; Velho, Gilberto; Tubach, Florence; Roussel, Ronan; Dupré, Jean-Claude; Malgrange, Dominique; Marre, Michel
2016-01-01
Off-loading is essential for diabetic foot management, but remains understudied. The evaluation of Off-loading using a new removable oRTHOsis in DIABetic foot (ORTHODIAB) trial aims to evaluate the efficacy of a new removable device "Orthèse Diabète" in the healing of diabetic foot wounds. ORTHODIAB is a French multi-centre randomized, open-label trial, with blinded end-point evaluation by an adjudication committee according to the Prospective Randomized Open Blinded End-point (PROBE) design. Main endpoints are adjudicated based on the analysis of diabetic foot photographs. Orthèse Diabète is a new removable off-loading orthosis (PROTEOR, France) with innovative functions, including real-time evaluation of off-loading and estimation of patients' adherence. Diabetic patients with neuropathic plantar ulcers or amputation wounds (toes or transmetatarsal) are assigned to one of two parallel groups, Orthèse Diabète or control (any removable device), according to a central computer-based randomization. Study visits are scheduled over 6 months (days D7 and D14, and months M1, M2, M3, and M6). The primary endpoint is the proportion of patients whose principal ulcer is healed at M3. Secondary endpoints are: the proportion of patients whose principal ulcer is healed at M1, M2, and M6; the proportion of patients whose initial ulcers are all healed at M1, M2, M3, and M6; principal ulcer area reduction; time-related ulcer-free survival; development of new ulcers; new lower-extremity amputation; infectious complications; off-loading adherence; and patient satisfaction. The study protocol was approved by the French National Agency for Medicines and Health Products Safety, and by the ethics committee of Saint-Louis Hospital (Paris). Comprehensive study information, including a Patient Information Sheet, has been provided to each patient, who must give written informed consent before enrolment.
Monitoring, data management, and statistical analyses are provided by UMANIS Life Science (Paris), independently of the sponsor. Since 27 October 2013, 13 centres have agreed to participate in this study, 117 participants have been included, and 70 have completed the study schedule. Study completion is expected by the end of 2016, and the main results will be published in 2017. The ORTHODIAB trial evaluates an innovative removable off-loading device, seeking to improve diabetic foot healing (ClinicalTrials.gov identifier: NCT01956162).
Eke, F U; Obamyonyi, A; Eke, N N; Oyewo, E A
2000-02-01
We compared the efficacy and tolerability of oral piroxicam 1 mg/kg/day with soluble aspirin given at 100 mg/kg/day taken four-hourly in 58 patients with sickle cell anaemia and severe osteoarticular painful attacks requiring hospitalization in a randomized, parallel study. The main investigational criteria were pain relief, limitation of movement, fever, and insomnia or agitation. Both groups were well matched at the commencement of therapy, but most patients on piroxicam showed remarkable and significant pain relief and improvement in other parameters within 24 h. Unwanted effects were absent in the piroxicam-treated group, whereas those treated with aspirin experienced nausea and vomiting. There were no significant changes in liver function tests with either form of treatment. Oral piroxicam is an effective and safe treatment in the management of the osteoarticular painful crisis in sickle cell anaemia. It might prevent the use of parenteral analgesics and hospitalization and reduce the loss of school hours in patients who are being treated for the bone pain crises that characterize sickle cell anaemia.
Dresser, Mark J; Kang, Dongwoo; Staehr, Peter; Gidwani, Shalini; Guo, Cindy; Mulhall, John P; Modi, Nishit B
2006-09-01
Dapoxetine is being developed as a treatment for premature ejaculation and has demonstrated rapid absorption and elimination in previous pharmacokinetic studies. Two open-label studies were conducted in healthy men: a parallel-group pharmacokinetic and safety study in young and elderly men and a randomized crossover food-effect study. Maximal plasma dapoxetine concentrations (C(max)) were similar in young and elderly men (338 and 310 ng/mL, respectively), as were the corresponding area under the plasma concentration versus time curve (AUC) values (2040 and 2280 ng x h/mL, respectively). When coadministered with food, C(max) was reduced by 11% (398 vs 443 ng/mL in the fed and fasted states, respectively), and the peak was delayed by approximately 30 minutes, indicating that food slowed the rate of absorption; however, systemic exposure to dapoxetine (ie, AUC) was not affected by food consumption. Thus, age or consumption of a high-fat meal has only a modest impact on dapoxetine pharmacokinetics in healthy men.
Load Balancing in Stochastic Networks: Algorithms, Analysis, and Game Theory
2014-04-16
The classic randomized load balancing model is the so-called supermarket model, which describes a system in which customers arrive to a service center with n parallel servers. Keywords: mean-field limits, supermarket model, thresholds, game, randomized load balancing.
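The supermarket model sketched above is easy to simulate: each arriving customer samples d servers uniformly at random and joins the shortest of the sampled queues. A minimal discrete-time sketch (the parameters are illustrative, not taken from the report):

```python
import random

def supermarket(n=50, d=2, steps=2000, load=0.9, seed=42):
    """Toy discrete-time supermarket model: each step brings roughly
    load*n arrivals; every arrival samples d queues uniformly at random
    and joins the shortest; each server then completes at most one job.
    Returns the maximum queue length observed over the run."""
    rng = random.Random(seed)
    q = [0] * n
    longest = 0
    for _ in range(steps):
        for _ in range(n):
            if rng.random() < load:                  # Bernoulli(load) arrival
                best = min(rng.sample(range(n), d), key=lambda j: q[j])
                q[best] += 1
        for j in range(n):                           # one service per server
            if q[j]:
                q[j] -= 1
        longest = max(longest, max(q))
    return longest
```

With d=2 the maximum queue length collapses relative to d=1 (purely random assignment), the "power of two choices" effect that motivates randomized load balancing.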
Random Number Generation for High Performance Computing
2015-01-01
... a quality metric for the parallel random number streams. ... with each subtask executed by a separate thread or process (henceforth, process).
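One common way to obtain the independent per-process streams discussed above is to derive child seeds from a single root seed; NumPy exposes this as `SeedSequence.spawn`. A small sketch (the function name and parameters are ours, for illustration):

```python
import numpy as np

def make_streams(n_processes, root_seed=12345):
    """Derive one statistically independent generator per worker from a
    single root seed, so parallel subtasks never share a stream."""
    root = np.random.SeedSequence(root_seed)
    return [np.random.default_rng(child) for child in root.spawn(n_processes)]

# Each process (or thread) gets its own stream; runs are reproducible
# because everything descends deterministically from root_seed.
streams = make_streams(4)
draws = [rng.standard_normal(3) for rng in streams]
```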
ERIC Educational Resources Information Center
Johnson, Mats; Fransson, Gunnar; Östlund, Sven; Areskoug, Björn; Gillberg, Christopher
2017-01-01
Background: Previous research has shown positive effects of Omega 3/6 fatty acids in children with inattention and reading difficulties. We aimed to investigate if Omega 3/6 improved reading ability in mainstream schoolchildren. Methods: We performed a 3-month parallel, randomized, double-blind, placebo-controlled trial followed by 3-month active…
Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice
2016-01-01
The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
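The design comparison above rests on estimating power by brute-force simulation: generate many virtual trials and count rejections. A hedged miniature for a two-arm parallel design (normally distributed outcomes and a two-sample z-test are our simplifying assumptions; the paper's disease and dose-effect models are richer):

```python
import numpy as np

def simulated_power(n_per_arm, effect=0.5, sd=1.0, n_trials=2000, seed=0):
    """Estimate the power of a two-arm parallel design: simulate
    n_trials virtual trials, apply a two-sided two-sample z-test to
    each, and return the fraction that reject the null."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96                        # two-sided 5% significance level
    rejections = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        if abs((treated.mean() - control.mean()) / se) > z_crit:
            rejections += 1
    return rejections / n_trials
```

Sweeping n_per_arm until the estimate crosses 0.80 reproduces, in spirit, the sample-size figures quoted for each design.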
Isiordia-Espinoza, Mario-Alberto; Martinez-Rider, Ricardo; Perez-Urizar, Jose
2016-01-01
Background: Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. Material and Methods: A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption, and adverse effects. Results: Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain compared with patients receiving intramuscular tramadol. Conclusions: According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery. Key words: Ketorolac, tramadol, third molar surgery, pain, preemptive analgesia. PMID:27475688
Cantarella, Daniele; Dominguez-Mompell, Ramon; Mallya, Sanjay M; Moschik, Christoph; Pan, Hsin Chuan; Miller, Joseph; Moon, Won
2017-11-01
Mini-implant-assisted rapid palatal expansion (MARPE) appliances have been developed with the aim to enhance the orthopedic effect induced by rapid maxillary expansion (RME). Maxillary Skeletal Expander (MSE) is a particular type of MARPE appliance characterized by the presence of four mini-implants positioned in the posterior part of the palate with bi-cortical engagement. The aim of the present study is to evaluate the MSE effects on the midpalatal and pterygopalatine sutures in late adolescents, using high-resolution CBCT. Specific aims are to define the magnitude and sagittal parallelism of midpalatal suture opening, to measure the extent of transverse asymmetry of split, and to illustrate the possibility of splitting the pterygopalatine suture. Fifteen subjects (mean age of 17.2 years; range, 13.9-26.2 years) were treated with MSE. Pre- and post-treatment CBCT exams were taken and superimposed. A novel methodology based on three new reference planes was utilized to analyze the sutural changes. Parameters were compared from pre- to post-treatment and between genders non-parametrically using the Wilcoxon signed-rank test. For the frequency of openings in the lower part of the pterygopalatine suture, Fisher's exact test was used. Regarding the magnitude of midpalatal suture opening, the split at anterior nasal spine (ANS) and at posterior nasal spine (PNS) was 4.8 and 4.3 mm, respectively. The amount of split at PNS was 90% of that at ANS, showing that the opening of the midpalatal suture was almost perfectly parallel antero-posteriorly. On average, one half of the anterior nasal spine (ANS) moved more than the contralateral one by 1.1 mm. Openings between the lateral and medial plates of the pterygoid process were detectable in 53% of the sutures (P < 0.05). No significant differences were found in the magnitude and frequency of suture opening between males and females. Correlation between age and suture opening was negligible (R² range, 0.3-4.2%). 
Midpalatal suture was successfully split by MSE in late adolescents, and the opening was almost perfectly parallel in a sagittal direction. Regarding the extent of transverse asymmetry of the split, on average one half of ANS moved more than the contralateral one by 1.1 mm. Pterygopalatine suture was split in its lower region by MSE, as the pyramidal process was pulled out from the pterygoid process. Patient gender and age had a negligible influence on suture opening for the age group considered in the study.
An overview of confounding. Part 1: the concept and how to address it.
Howards, Penelope P
2018-04-01
Confounding is an important source of bias, but it is often misunderstood. We consider how confounding occurs and how to address confounding using examples. Study results are confounded when the effect of the exposure on the outcome mixes with the effects of other risk and protective factors for the outcome. This problem arises when these factors are present to different degrees among the exposed and unexposed study participants, but not all differences between the groups result in confounding. Thinking about an ideal study where all of the population of interest is exposed in one universe and is unexposed in a parallel universe helps to distinguish confounders from other differences. In an actual study, an observed unexposed population is chosen to stand in for the unobserved parallel universe. Differences between this substitute population and the parallel universe result in confounding. Confounding by identified factors can be addressed analytically and through study design, but only randomization has the potential to address confounding by unmeasured factors. Nevertheless, a given randomized study may still be confounded. Confounded study results can lead to incorrect conclusions about the effect of the exposure of interest on the outcome. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.
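The parallel-universe argument above can be made concrete with a toy simulation: give a factor C an effect on both exposure and outcome, set the true exposure effect to zero, and compare the crude exposed-versus-unexposed contrast with and without randomization. All numbers below are invented for illustration:

```python
import numpy as np

def crude_effect(randomized, n=100_000, seed=7):
    """Simulate a null exposure (true effect = 0) with a confounder C
    that raises both the probability of exposure and the outcome level,
    then return the crude exposed-minus-unexposed mean difference."""
    rng = np.random.default_rng(seed)
    c = rng.binomial(1, 0.5, n)                  # confounder
    if randomized:
        e = rng.binomial(1, 0.5, n)              # coin-flip exposure
    else:
        e = rng.binomial(1, 0.2 + 0.6 * c, n)    # C drives exposure
    y = 2.0 * c + rng.normal(0.0, 1.0, n)        # C drives outcome; E does not
    return y[e == 1].mean() - y[e == 0].mean()
```

The observational version returns a spurious effect near 1.2 even though exposure does nothing; the randomized version returns a value near zero, illustrating why only randomization addresses unmeasured confounders.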
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
Program summary
Program title: NWChem
Catalogue identifier: AEGI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Open Source Educational Community License
No. of lines in distributed program, including test data, etc.: 11 709 543
No. of bytes in distributed program, including test data, etc.: 680 696 106
Distribution format: tar.gz
Programming language: Fortran 77, C
Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
Operating system: Linux, OS X, Windows
Has the code been vectorised or parallelized?: Code is parallelized
Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations.
Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when a download or e-mail is requested. Instead, an HTML file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, the complexity of the method, the number of CPUs, and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.
Electromagnetic physics models for parallel computing architectures
Amadio, G.; Ananya, A.; Apostolakis, J.; ...
2016-11-21
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.
Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, Hongzhang; Williams, Samuel; Jong, Wibe de
In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.
Enders, Judith; Rief, Matthias; Zimmermann, Elke; Asbach, Patrick; Diederichs, Gerd; Wetz, Christoph; Siebert, Eberhard; Wagner, Moritz; Hamm, Bernd; Dewey, Marc
2013-01-01
The purpose of the present study was to compare the image quality of spinal magnetic resonance (MR) imaging performed on a high-field horizontal open versus a short-bore MR scanner in a randomized controlled study setup. Altogether, 93 (80% women, mean age 53) consecutive patients underwent spine imaging after random assignment to a 1-T horizontal open MR scanner with a vertical magnetic field or a 1.5-T short-bore MR scanner. This patient subset was part of a larger cohort. Image quality was assessed by determining qualitative parameters, signal-to-noise (SNR) and contrast-to-noise ratios (CNR), and quantitative contour sharpness. The image quality parameters were higher for short-bore MR imaging. Regarding all sequences, the relative differences were 39% for the mean overall qualitative image quality, 53% for the mean SNR values, and 34-37% for the quantitative contour sharpness (P<0.0001). The CNR values were also higher for images obtained with the short-bore MR scanner. No sequence was of very poor (nondiagnostic) image quality. Scanning times were significantly longer for examinations performed on the open MR scanner (mean: 32±22 min versus 20±9 min; P<0.0001). In this randomized controlled comparison of spinal MR imaging with an open versus a short-bore scanner, short-bore MR imaging revealed considerably higher image quality with shorter scanning times. ClinicalTrials.gov NCT00715806.
Hunsaker, Sanita L; Jensen, Chad D
2017-05-01
To determine the effectiveness of a parent health report on fruit and vegetable consumption among preschoolers and kindergarteners. Pre-post open design trial and a randomized controlled trial. A university-sponsored preschool and kindergarten. A total of 63 parents of preschool and kindergarten students participated in the pre-post open design trial and 65 parents participated in the randomized controlled trial. Parents in intervention groups were given a parent health report providing information about their child's fruit and vegetable intake as well as recommendations for how to increase their child's fruit and vegetable consumption. Change in fruit and vegetable consumption. Latent growth curve modeling with Bayesian estimation. Vegetable consumption increased by 0.3 servings/d in the open trial and 0.65 servings/d in the randomized trial. Fruit consumption did not increase significantly in either study. Results from both an open trial and a randomized controlled trial suggested that the parent health report may be a beneficial tool to increase vegetable consumption in preschoolers and kindergarteners. Increases in vegetable consumption can lead to the establishment of lifelong habits of healthy vegetable intake and decrease risk for chronic diseases. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Effect of Inhalation of Lavender Essential Oil on Vital Signs in Open Heart Surgery ICU.
Salamati, Armaiti; Mashouf, Soheyla; Mojab, Faraz
2017-01-01
This study evaluated the effects of inhalation of Lavender essential oil on vital signs in an open heart surgery ICU. The main complaints of patients after open-heart surgery are dysrhythmia, tachycardia, and hypertension due to stress and pain. Because of the side effects of chemical drugs such as opioids, the use of non-invasive methods such as aromatherapy for relieving stress and pain, in parallel with chemical agents, could be an important way to decrease the dose and side effects of analgesics. In a multicenter, single-blind trial, 40 patients who had open-heart surgery were recruited. Inclusion criteria were full consciousness, lack of hemorrhage, heart rate > 60 beats/min, systolic blood pressure > 100 mmHg, diastolic blood pressure > 60 mmHg, no use of beta blockers in the operating room or ICU, no history of opioid addiction or regular analgesic use, spontaneous breathing ability, and no synthetic opioids within 2 h before extubation. Ten minutes after extubation, the patients' vital signs [including BP, HR, Central Venous Pressure (CVP), SpO2, and RR] were measured. Then, a cotton swab impregnated with 2 drops of Lavender essential oil 2% was placed in the patient's oxygen mask, and the patient breathed through it for 10 min. Thirty minutes after aromatherapy, the vital signs were measured again. The main outcome of this study was the change in vital signs before and after aromatherapy. Statistical significance was accepted at P < 0.05. Paired t-tests showed a significant difference in systolic blood pressure (p < 0.001), diastolic blood pressure (p = 0.001), and heart rate (p = 0.03) before and after the intervention. However, the results did not show any significant difference in respiratory rate (p = 0.1), SpO2 (p = 0.5), or CVP (p = 0.2) before and after inhaling Lavender essential oil. 
Therefore, aromatherapy could effectively reduce blood pressure and heart rate in patients admitted to the open heart surgery ICU and can be used as an independent nursing intervention to help stabilize these vital signs. The limitations of our study were the sample size and the lack of a control group. Randomized clinical trials with larger sample sizes are recommended.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
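The remapping step can be illustrated with a simple greedy heuristic: repeatedly commit the cheapest remaining (partition, processor) pair. This is a sketch of the general assignment problem, not the paper's heuristic remapper:

```python
def greedy_remap(move_cost):
    """Assign each partition to a distinct processor, greedily taking
    the pair with the smallest data-movement cost first. move_cost[p][q]
    is the amount of data that must migrate if partition p is placed on
    processor q (zero when p already resides on q)."""
    n = len(move_cost)
    pairs = sorted((move_cost[p][q], p, q)
                   for p in range(n) for q in range(n))
    taken_p, taken_q, mapping = set(), set(), {}
    for cost, p, q in pairs:
        if p not in taken_p and q not in taken_q:
            mapping[p] = q
            taken_p.add(p)
            taken_q.add(q)
    return mapping
```

Greedy assignment is fast and often lands close to the optimum, in the spirit of the paper's observation that its heuristic remapper stays within a few percent of the optimal assignment at a small fraction of the computational cost.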
Memory-based frame synchronizer. [for digital communication systems
NASA Technical Reports Server (NTRS)
Stattel, R. J.; Niswander, J. K. (Inventor)
1981-01-01
A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
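The RAM-based recognizer admits a compact software model: the serial-to-parallel converter's current window addresses a programmable lookup table, so reprogramming the table changes the sync word without touching any comparison logic. A sketch (a dict stands in for the RAM):

```python
def make_recognizer(sync_word, width):
    """Program a lookup table that holds 1 only at the sync pattern,
    mimicking the RAM decoder in the synchronizer."""
    table = {addr: 0 for addr in range(1 << width)}
    table[sync_word] = 1
    return table

def find_sync(bits, table, width):
    """Shift bits in serially; whenever the width-bit parallel window
    addresses a 1 in the table, record the start of a sync word."""
    window, hits = 0, []
    mask = (1 << width) - 1
    for i, b in enumerate(bits):
        window = ((window << 1) | b) & mask
        if i >= width - 1 and table[window]:
            hits.append(i - width + 1)
    return hits

hits = find_sync([0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
                 make_recognizer(0b1011, 4), 4)
```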
Fencing data transfers in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Mamidala, Amith R.
2015-06-02
Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
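The FENCE semantics described above, completion only after every previously initiated SEND between the two endpoints has completed, can be sketched with a worker thread and per-send completion events. This is a toy model of the ordering guarantee, not the PAMI API:

```python
import queue
import threading

class Endpoint:
    """One worker drains an in-order send queue, modeling deterministic
    transfers through a shared-memory segment."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.delivered = []
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            item = self.inbox.get()
            if item is None:
                return
            payload, done = item
            self.delivered.append(payload)   # FIFO, hence deterministic
            done.set()                       # mark this SEND complete

    def send(self, payload):
        done = threading.Event()
        self.inbox.put((payload, done))
        return done

    def fence(self, pending):
        for done in pending:                 # block until every SEND
            done.wait()                      # issued so far has completed

ep = Endpoint()
pending = [ep.send(i) for i in range(5)]
ep.fence(pending)            # returns only after all five SENDs land
ep.inbox.put(None)           # shut the worker down
```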
Blocksome, Michael A.; Mamidala, Amith R.
2013-09-03
Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
Fencing data transfers in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Mamidala, Amith R.
2015-06-09
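The DMA variant above applies the same principle to a DMA controller: because the controller delivers descriptors into shared memory deterministically and in order, a FENCE can ride the same descriptor ring as the data transfers, and its completion implies all earlier DMA operations are done. A rough sketch, again with hypothetical names rather than the real PAMI/DMA interfaces:

```python
class DmaController:
    """Toy model of a DMA controller that moves data into a shared-memory
    segment strictly in descriptor order (deterministic delivery)."""

    def __init__(self, segment_size=64):
        self.shared_segment = bytearray(segment_size)
        self._ring = []                 # ordered DMA descriptor ring

    def post_put(self, offset, data):
        # An active DMA PUT: a descriptor only, no fence accounting.
        self._ring.append(("PUT", offset, data))

    def post_fence(self):
        # The FENCE rides the same ring; ordering alone guarantees that
        # reaching it means every earlier DMA descriptor has completed.
        done = [False]
        self._ring.append(("FENCE", done))
        return done

    def process_ring(self):
        # The controller drains descriptors in the order they were posted.
        for desc in self._ring:
            if desc[0] == "PUT":
                _, offset, data = desc
                self.shared_segment[offset:offset + len(data)] = data
            else:
                desc[1][0] = True       # fence completes after prior PUTs
        self._ring.clear()

ctl = DmaController()
ctl.post_put(0, b"hello")
ctl.post_put(5, b"world")
flag = ctl.post_fence()
ctl.process_ring()
print(flag[0], bytes(ctl.shared_segment[:10]))  # True b'helloworld'
```

When the fence flag flips, both PUTs are guaranteed to be visible in the shared segment, without the fence having tracked either transfer individually.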
Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
Blocksome, Michael A.; Mamidala, Amith R.
2014-02-11
Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
2015-06-13
The benefit of CT coronary angiography (CTCA) in patients presenting with stable chest pain has not been systematically studied. We aimed to assess the effect of CTCA on the diagnosis, management, and outcome of patients referred to the cardiology clinic with suspected angina due to coronary heart disease. In this prospective open-label, parallel-group, multicentre trial, we recruited patients aged 18-75 years referred for the assessment of suspected angina due to coronary heart disease from 12 cardiology chest pain clinics across Scotland. We randomly assigned (1:1) participants to standard care plus CTCA or standard care alone. Randomisation was done with a web-based service to ensure allocation concealment. The primary endpoint was certainty of the diagnosis of angina secondary to coronary heart disease at 6 weeks. All analyses were intention to treat, and patients were analysed in the group they were allocated to, irrespective of compliance with scanning. This study is registered with ClinicalTrials.gov, number NCT01149590. Between Nov 18, 2010, and Sept 24, 2014, we randomly assigned 4146 (42%) of 9849 patients who had been referred for assessment of suspected angina due to coronary heart disease. 47% of participants had a baseline clinic diagnosis of coronary heart disease and 36% had angina due to coronary heart disease. At 6 weeks, CTCA reclassified the diagnosis of coronary heart disease in 558 (27%) patients and the diagnosis of angina due to coronary heart disease in 481 (23%) patients (standard care 22 [1%] and 23 [1%]; p<0·0001). Although both the certainty (relative risk [RR] 2·56, 95% CI 2·33-2·79; p<0·0001) and frequency of the diagnosis of coronary heart disease increased (1·09, 1·02-1·17; p=0·0172), the certainty increased (1·79, 1·62-1·96; p<0·0001) and frequency seemed to decrease (0·93, 0·85-1·02; p=0·1289) for the diagnosis of angina due to coronary heart disease. 
This changed planned investigations (15% vs 1%; p<0·0001) and treatments (23% vs 5%; p<0·0001) but did not affect 6-week symptom severity or subsequent admissions to hospital for chest pain. After 1·7 years, CTCA was associated with a 38% reduction in fatal and non-fatal myocardial infarction (26 vs 42, HR 0·62, 95% CI 0·38-1·01; p=0·0527), but this was not significant. In patients with suspected angina due to coronary heart disease, CTCA clarifies the diagnosis, enables targeting of interventions, and might reduce the future risk of myocardial infarction. The Chief Scientist Office of the Scottish Government Health and Social Care Directorates funded the trial with supplementary awards from Edinburgh and Lothian's Health Foundation Trust and the Heart Diseases Research Fund. Copyright © 2015 Newby et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd. All rights reserved.