Testing for carryover effects after cessation of treatments: a design approach.
Sturdevant, S Gwynn; Lumley, Thomas
2016-08-02
Recently, trials addressing noisy measurements with diagnosis occurring by exceeding thresholds (such as diabetes and hypertension) have been published which attempt to measure carryover - the impact that treatment has on an outcome after cessation. The design of these trials has been criticised and simulations have been conducted which suggest that the parallel-designs used are not adequate to test this hypothesis; two solutions are that either a differing parallel-design or a cross-over design could allow for diagnosis of carryover. We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying criteria to diagnose systolic hypertension. Using the difference in cumulative incidence hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having acceptable Type I error rate. Under the null hypothesis of no carryover the difference is well above the nominal 5 % error rate. When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis is validated by simulation.
Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.
Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M
2010-11-01
Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry -exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and discovered it a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin
2017-08-17
A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However the discussion is felt to lack a sufficient degree of consideration devoted to the true virtues of the delayed start design and the implications either in terms of required sample-size, overall information, or interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design particularly in terms of overall efficacy and sample size requirements as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that delayed-start design is not suitable for drugs that establish an immediate treatment effect, but for drugs with effects developing over time, instead. In addition, the sample size will always increase as an implication for a reduced time on placebo resulting in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD)-a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power were different under different response time to treatment distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
Parent-Child Parallel-Group Intervention for Childhood Aggression in Hong Kong
ERIC Educational Resources Information Center
Fung, Annis L. C.; Tsang, Sandra H. K. M.
2006-01-01
This article reports the original evidence-based outcome study on parent-child parallel group-designed Anger Coping Training (ACT) program for children aged 8-10 with reactive aggression and their parents in Hong Kong. This research program involved experimental and control groups with pre- and post-comparison. Quantitative data collection…
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.;
2003-01-01
This paper presents a design approach to help attain any random grouping pattern between the microbatteries. In this case, the result is an ability to charge microbatteries in parallel and to discharge microbatteries in parallel or pairs of microbatteries in series.
Review of Recent Methodological Developments in Group-Randomized Trials: Part 1—Design
Li, Fan; Gallis, John A.; Prague, Melanie; Murray, David M.
2017-01-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis. PMID:28426295
Review of Recent Methodological Developments in Group-Randomized Trials: Part 1-Design.
Turner, Elizabeth L; Li, Fan; Gallis, John A; Prague, Melanie; Murray, David M
2017-06-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have highlighted the developments of the past 13 years in design with a companion article to focus on developments in analysis. As a pair, these articles update the 2004 review. We have discussed developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis.
A CS1 pedagogical approach to parallel thinking
NASA Astrophysics Data System (ADS)
Rague, Brian William
Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.
Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY
2011-11-08
A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes designating a distribution of elements of at least a portion of the array to be executed by specific processors in the multi-dimensional mesh of parallel processors. The pattern of the designating includes a cyclical repetitive pattern of the parallel processor mesh, as modified to have a skew in at least one dimension so that both a row of data in the array and a column of data in the array map to respective contiguous groupings of the processors such that a dimension of the contiguous groupings is greater than one.
Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.
2012-10-23
Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
Self-designed femoral neck guide pin locator for femoral neck fractures.
Xia, Shengli; Wang, Ziping; Wang, Minghui; Wu, Zuming; Wang, Xiuhui
2014-01-01
Closed reduction and fixation with 3 cannulated screws is a widely accepted surgery for the treatment of femoral neck fractures. However, how to obtain optimal screw placement remains unclear. In the current study, the authors designed a guide pin positioning system for femoral neck fracture cannulated screw fixation and examined its application value by comparing it with freehand guide needle positioning and with general guide pin locator positioning provided by equipment manufacturers. The screw reset rate, screw parallelism, triangle area formed by the link line of the entry point of 3 guide pins, and maximum vertical load bearing of the femoral neck after internal fixation were recorded. As expected, the triangle area was largest in the self-designed positioning group, followed by the general positioning group and the freehand positioning group. The difference among the 3 groups was statistically significant (P<.05). Anteroposterior and lateral radiographs showed that the screws were more parallel in the self-designed positioning group and general positioning group compared with the freehand positioning group (P<.05). The screw reset rate in the self-designed positioning group was significantly lower than that in the general positioning group and the freehand positioning group (P<.05). Maximum bearing load among the 3 groups was equivalent, showing no statistically significant difference (P>.05). The authors’ self-designed guide pin positioning system has the potential to accurately insert cannulated screws in femoral neck fractures and may reduce bone loss and unnecessary radiation.
Gadah, Nouf S; Brunstrom, Jeffrey M; Rogers, Peter J
2016-12-01
The vast majority of preload-test-meal studies that have investigated the effects on energy intake of disguised nutrient or other food/drink ingredient manipulations have used a cross-over design. We argue that this design may underestimate the effect of the manipulation due to carry-over effects. To test this we conducted comparable cross-over (n = 69) and parallel-groups (n = 48) studies testing the effects of sucrose versus low-calorie sweetener (sucralose) in a drink preload on test-meal energy intake. The parallel-groups study included a baseline day in which only the test meal was consumed. Energy intake in that meal was used to control for individual differences in energy intake in the analysis of the effects of sucrose versus sucralose on energy intake on the test day. Consistent with our prediction, the effect of consuming sucrose on subsequent energy intake was greater when measured in the parallel-groups study than in the cross-over study (respectively 64% versus 36% compensation for the 162 kcal difference in energy content of the sucrose and sucralose drinks). We also included a water comparison group in the parallel-groups study (n = 24) and found that test-meal energy intake did not differ significantly between the water and sucralose conditions. Together, these results confirm that consumption of sucrose in a drink reduces subsequent energy intake, but by less than the energy content of the drink, whilst drink sweetness does not increase food energy intake. Crucially, though, the studies demonstrate that study design affects estimated energy compensation. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Design, Desire, and Difference
ERIC Educational Resources Information Center
Leander, Kevin M.; Boldt, Gail
2018-01-01
In response to the rise in popularity of concepts of "design" in education research, pedagogy, and curriculum design, in this article we consider how the New London Group conceived of the role of student design practices as an outcome of pedagogy, as well as the parallel role of design in teaching practices. In this descriptive analysis,…
Concurrent Collections (CnC): A new approach to parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knobe, Kathleen
2010-05-07
A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicitymore » or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer or program) who demands maximal flexibility for optimizing the mapping to a wide range of target platforms (parallel / serial, shared / distributed, homogeneous / heterogeneous, etc.) but needs no knowledge of the domain. Concurrent Collections (CnC) is based on this separation of concerns. The talk will present CnC and its benefits. About the speaker. Kathleen Knobe has focused throughout her career on parallelism especially compiler technology, runtime system design and language design. She worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991 designing compilers for a wide range of parallel platforms for Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipment’s Cambridge Research Lab (CRL). She stayed through the DEC/Compaq/HP mergers and when CRL was acquired and absorbed by Intel. She currently works in the Software and Services Group / Technology Pathfinding and Innovation.« less
Concurrent Collections (CnC): A new approach to parallel programming
Knobe, Kathleen
2018-04-16
A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer or program) who demands maximal flexibility for optimizing the mapping to a wide range of target platforms (parallel / serial, shared / distributed, homogeneous / heterogeneous, etc.) but needs no knowledge of the domain. Concurrent Collections (CnC) is based on this separation of concerns. The talk will present CnC and its benefits. About the speaker. Kathleen Knobe has focused throughout her career on parallelism especially compiler technology, runtime system design and language design. She worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991 designing compilers for a wide range of parallel platforms for Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipmentâs Cambridge Research Lab (CRL). She stayed through the DEC/Compaq/HP mergers and when CRL was acquired and absorbed by Intel. She currently works in the Software and Services Group / Technology Pathfinding and Innovation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T
2013-01-01
Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level where a set of designated processes participate in the I/O process by splitting themore » I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O) that incorporates our optimized two-phase I/O approach. The library provides a simplified higher level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5) and implements optimized I/O access patterns that can scale on larger number of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.« less
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.
Locality Aware Concurrent Start for Stencil Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.
Stencil computations are at the heart of many physical simulations used in scientific codes. Thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven to be very efficient in providing better performance for these critical kernels. Nevertheless, with many core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodesmore » with caches, main memory and an interconnect network. New architectural designs exhibit complex grouping of nodes, cores, threads, caches and memory connected by an ever evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e. threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory hierarchy aware tile groups. Each execution schedule and tile shape exploit the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvement ranging from 5.58% to 31.17% over existing state-of-the-art techniques.« less
Bintivanou, Aimilia; Pissiotis, Argirios; Michalakis, Konstantinos
2017-04-01
Parallel labiolingual walls and the preservation of the cingulum in anterior tooth preparations have been advocated. However, their contribution to retention and resistance form has not been evaluated. The purpose of this in vitro study was to evaluate the retention and resistance failure loads of 2 preparation designs for maxillary anterior teeth. Forty metal restorations were fabricated and paired with 40 cobalt-chromium prepared tooth analogs. Twenty of the specimens had parallel buccolingual walls at the cervical part (group PBLW; the control group), whereas the remaining 20 had converging buccolingual walls (group CBLW; the experimental group). The restorations were cemented to the tooth analogs with a resin-modified glass ionomer luting agent. Ten specimens from each group were subjected to tensile loading with a universal testing machine; the rest were subjected to compression loading until failure. Descriptive statistics and the independent t test (α=.05) were used to determine the effect of failure loads in the tested groups. The independent t test revealed statistically significant differences between the tested groups in tensile loading (P<.001) and in compressive loading (P<.001). The PBLW group presented a higher tensile failure load than the CBLW. On the contrary, the PBLW group presented a smaller compression failure load than the CBLW. Parallelism of the buccolingual axial walls in anterior maxillary teeth increased the retention form but decreased the resistance form. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne
2014-01-01
Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This "preliminary" randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive…
Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.
2010-03-02
Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
Zúñiga-Reinoso, Álvaro; Méndez, Marco A
2018-04-24
The origin of cryptic species has traditionally been associated with events of recent speciation, genetic constraints, selection of an adaptive character, sexual selection and/or convergent evolution. Species of the genus Callyntra inhabit coastal terraces, mountain slopes, and peaks; their elytral designs are associated with each of these habitats. However, cryptic species have been described within each of these habitats; the taxonomy of this group has been problematic, thus establishing the phylogenetic relationships in this group is fundamental to clarify the systematics and evolutionary patterns of Callyntra. We reconstructed the phylogeny of this group using two mitochondrial genes (COI, 16S) and one nuclear gene (Mp20). We also performed species delimitation using PTP based methods (PTP, mlPTP, bPTP) and GMYC, and evaluated the evolution of the elytral design related to habitat preference. The results showed a tree with five clades, that together with the different methods of species delimitation recovered the described species and suggested at least five new species. The elytral design and habitat preference showed phylogenetic signals. We propose a new classification based on monophyletic groups recovered by phylogenetic analyses. We also suggest that parallel evolution in different habitats and later stasis in the elytral design would be the cause of the origin of cryptic species in this group from central Chile. Copyright © 2018 Elsevier Inc. All rights reserved.
Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.
Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul
2009-02-15
The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models.
A Multiscale Parallel Computing Architecture for Automated Segmentation of the Brain Connectome
Knobe, Kathleen; Newton, Ryan R.; Schlimbach, Frank; Blower, Melanie; Reid, R. Clay
2015-01-01
Several groups in neurobiology have embarked into deciphering the brain circuitry using large-scale imaging of a mouse brain and manual tracing of the connections between neurons. Creating a graph of the brain circuitry, also called a connectome, could have a huge impact on the understanding of neurodegenerative diseases such as Alzheimer’s disease. Although considerably smaller than a human brain, a mouse brain already exhibits one billion connections and manually tracing the connectome of a mouse brain can only be achieved partially. This paper proposes to scale up the tracing by using automated image segmentation and a parallel computing approach designed for domain experts. We explain the design decisions behind our parallel approach and we present our results for the segmentation of the vasculature and the cell nuclei, which have been obtained without any manual intervention. PMID:21926011
The Effects of Using Learning Objects in Two Different Settings
ERIC Educational Resources Information Center
Cakiroglu, Unal; Baki, Adnan; Akkan, Yasar
2012-01-01
The study compared the effects of Learning Objects (LOs) within different applications; in classroom and in extracurricular activities. So in this study, firstly a Learning Object Repository (LOR) has been designed in parallel with 9th grade school mathematics curriculum. One of the two treatment groups was named as "classroom group" (n…
Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.
Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M
2017-07-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.
ERIC Educational Resources Information Center
Wenrich, Tionni R.; Brown, J. Lynne; Wilson, Robin Taylor; Lengerich, Eugene J.
2012-01-01
Objective: To evaluate the effectiveness of a community-based intervention promoting the serving and eating of deep-orange, cruciferous, and dark-green leafy vegetables. Design: Randomized, parallel-group, community-based intervention with a baseline/postintervention/3-month follow-up design. Setting and Participants: Low-income food preparers (n…
ERIC Educational Resources Information Center
Jones, Daniel; Alexa, Melina
As part of the development of a completely sub-symbolic machine translation system, a method for automatically identifying German compounds was developed. Given a parallel bilingual corpus, German compounds are identified along with their English word groupings by statistical processing alone. The underlying principles and the design process are…
DIFFERENTIAL FAULT SENSING CIRCUIT
Roberts, J.H.
1961-09-01
A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.
1986-12-01
17 III. Analysis of Parallel Design ................................................ 18 Parallel Abstract Data ...Types ........................................... 18 Abstract Data Type .................................................. 19 Parallel ADT...22 Data -Structure Design ........................................... 23 Object-Oriented Design
Classroom management of situated group learning: A research study of two teaching strategies
NASA Astrophysics Data System (ADS)
Smeh, Kathy; Fawns, Rod
2000-06-01
Although peer-based work is encouraged by theories in developmental psychology and although classroom interventions suggest it is effective, there are grounds for recognising that young pupils find collaborative learning hard to sustain. Discontinuities in collaborative skill during development have been suggested as one interpretation. Theory and research have neglected situational continuities that the teacher may provide in management of formal and informal collaborations. This experimental study, with the collaboration of the science faculty in one urban secondary college, investigated the effect of two role attribution strategies on communication in peer groups of different gender composition in three parallel Year 8 science classes. The group were set a problem that required them to design an experiment to compare the thermal insulating properties of two different materials. This presents the data collected and key findings, and reviews the findings from previous parallel studies that have employed the same research design in different school settings. The results confirm the effectiveness of social role attribution strategies in teacher management of communication in peer-based work.
Reference datasets for bioequivalence trials in a two-group parallel design.
Fuglsang, Anders; Schütz, Helmut; Labes, Detlew
2015-03-01
In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total 11 datasets into the public domain along with proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
architectures. Crowlely's group has designed and implemented new methods and algorithms specifically for biomass , Crowley developed highly parallel methods for simulations of bio-macromolecules. Affiliated Research advanced sampling methods, Crowley and his team determine free energies such as binding of substrates
The Pasm Parallel Processing System: Design, Simulation, and Image Processing Applications. Volume 1
1989-12-31
was not funded and the author and other members of the PASM group turned their attention to related studies such as the further definition of the... Four such quadrants comprise the set of MCs and PEs. The logical PE i within each MC group is serviced by MSU i. Figure 1.9.3 shows that the MSUs are...instead of four Data Input Queues and with a 4-by-4 instead of a 2-by-2 crossbar switch could be designed. Studies indicate that the perfor- mance of a
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1998-01-01
In this grant, we have proposed a three-year research effort focused on developing High Performance Computation and Communication (HPCC) methodologies for structural analysis on parallel processors and clusters of workstations, with emphasis on reducing the structural design cycle time. Besides consolidating and further improving the FETI solver technology to address plate and shell structures, we have proposed to tackle the following design related issues: (a) parallel coupling and assembly of independently designed and analyzed three-dimensional substructures with non-matching interfaces, (b) fast and smart parallel re-analysis of a given structure after it has undergone design modifications, (c) parallel evaluation of sensitivity operators (derivatives) for design optimization, and (d) fast parallel analysis of mildly nonlinear structures. While our proposal was accepted, support was provided only for one year.
Fault tolerance in a supercomputer through dynamic repartitioning
Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Takken, Todd E.
2007-02-27
A multiprocessor, parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.
Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1Area×Time=1AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.
Polarization-dependent thin-film wire-grid reflectarray for terahertz waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tiaoming; School of Information Science and Engineering, Lanzhou University, Lanzhou 730000; Upadhyay, Aditi
2015-07-20
A thin-film polarization-dependent reflectarray based on patterned metallic wire grids is realized at 1 THz. Unlike conventional reflectarrays with resonant elements and a solid metal ground, parallel narrow metal strips with uniform spacing are employed in this design to construct both the radiation elements and the ground plane. For each radiation element, a certain number of thin strips with an identical length are grouped to effectively form a patch resonator with equivalent performance. The ground plane is made of continuous metallic strips, similar to conventional wire-grid polarizers. The structure can deflect incident waves with the polarization parallel to the stripsmore » into a designed direction and transmit the orthogonal polarization component. Measured radiation patterns show reasonable deflection efficiency and high polarization discrimination. Utilizing this flexible device approach, similar reflectarray designs can be realized for conformal mounting onto surfaces of cylindrical or spherical devices for terahertz imaging and communications.« less
Free-air ionization chamber, FAC-IR-300, designed for medium energy X-ray dosimetry
NASA Astrophysics Data System (ADS)
Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.
2017-01-01
The primary standard for X-ray photons is based on parallel-plate free-air ionization chamber (FAC). Therefore, the Atomic Energy Organization of Iran (AEOI) is tried to design and build the free-air ionization chamber, FAC-IR-300, for low and medium energy X-ray dosimetry. The main aim of the present work is to investigate specification of the FAC-IR-300 ionization chamber and design it. FAC-IR-300 dosimeter is composed of two parallel plates, a high voltage (HV) plate and a collector plate, along with a guard electrode that surrounds the collector plate. The guard plate and the collector were separated by an air gap. For obtaining uniformity in the electric field distribution, a group of guard strips was used around the ionization chamber. These characterizations involve determining the exact dimensions of the ionization chamber by using Monte Carlo simulation and introducing correction factors.
The language parallel Pascal and other aspects of the massively parallel processor
NASA Technical Reports Server (NTRS)
Reeves, A. P.; Bruner, J. D.
1982-01-01
A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.
Bay, Esther; Ribbens-Grimm, Christine; Chan, Roxane R
2016-05-01
This clinical methods discursive highlights the development, piloting, and evaluation of two group interventions designed for persons who experienced chronic traumatic brain injury (TBI). Intervention science for this population is limited and lacking in rigor. Our innovative approach to customize existing interventions and develop parallel delivery methods guided by Allostatic Load theory is presented and preliminary results described. Overall, parallel group interventions delivered by trained leaders with mental health expertise were acceptable and feasible for persons who reported being depressed, stressed, and symptomatic. They reported being satisfied with the overall programs and mostly satisfied with the individual classes. Attendance was over the anticipated 70% expected rate and changes in daily living habits were reported by participants. These two group interventions show promise in helping persons to self manage their chronic stress and symptomatology. Copyright © 2015 Elsevier Inc. All rights reserved.
Methodology Series Module 4: Clinical Trials.
Setia, Maninder Singh
2016-01-01
In a clinical trial, study participants are (usually) divided into two groups. One group is then given the intervention and the other group is not given the intervention (or may be given some existing standard of care). We compare the outcomes in these groups and assess the role of intervention. Some of the trial designs are (1) parallel study design, (2) cross-over design, (3) factorial design, and (4) withdrawal group design. The trials can also be classified according to the stage of the trial (Phase I, II, III, and IV) or the nature of the trial (efficacy vs. effectiveness trials, superiority vs. equivalence trials). Randomization is one of the procedures by which we allocate different interventions to the groups. It ensures that all the included participants have a specified probability of being allocated to either of the groups in the intervention study. If participants and the investigator know about the allocation of the intervention, then it is called an "open trial." However, many of the trials are not open - they are blinded. Blinding is useful to minimize bias in clinical trials. The researcher should familiarize themselves with the CONSORT statement and the appropriate Clinical Trials Registry of India.
Methodology Series Module 4: Clinical Trials
Setia, Maninder Singh
2016-01-01
In a clinical trial, study participants are (usually) divided into two groups. One group is then given the intervention and the other group is not given the intervention (or may be given some existing standard of care). We compare the outcomes in these groups and assess the role of intervention. Some of the trial designs are (1) parallel study design, (2) cross-over design, (3) factorial design, and (4) withdrawal group design. The trials can also be classified according to the stage of the trial (Phase I, II, III, and IV) or the nature of the trial (efficacy vs. effectiveness trials, superiority vs. equivalence trials). Randomization is one of the procedures by which we allocate different interventions to the groups. It ensures that all the included participants have a specified probability of being allocated to either of the groups in the intervention study. If participants and the investigator know about the allocation of the intervention, then it is called an “open trial.” However, many of the trials are not open – they are blinded. Blinding is useful to minimize bias in clinical trials. The researcher should familiarize themselves with the CONSORT statement and the appropriate Clinical Trials Registry of India. PMID:27512184
Working Memory Training: Improving Intelligence--Changing Brain Activity
ERIC Educational Resources Information Center
Jausovec, Norbert; Jausovec, Ksenija
2012-01-01
The main objectives of the study were: to investigate whether training on working memory (WM) could improve fluid intelligence, and to investigate the effects WM training had on neuroelectric (electroencephalography--EEG) and hemodynamic (near-infrared spectroscopy--NIRS) patterns of brain activity. In a parallel group experimental design,…
Effect of Piracetam on Dyslexic's Reading Ability.
ERIC Educational Resources Information Center
Wilsher, C.; And Others
1985-01-01
Forty-six dyslexic boys (aged eight to 13) were administered Piracetam or placebo in a double-blind, parallel experiment. Although, overall, there were no significant group effects, the within-subject design revealed improvements in reading speed and accuracy in Piracetam Ss. Dyslexics with higher reading ages improved significantly compared to…
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Implementation of Sensor and Control Designs for Bioregenerative Systems
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro R. (Editor)
1990-01-01
The goal of the Spring 1990 EGM 4001 Design class was to design, fabricate, and test sensors and control systems for a closed loop life support system (CLLSS). The designs investigated were to contribute to the development of NASA's Controlled Ecological Life Support System (CELSS) at Kennedy Space Center (KSC). Designs included a seed moisture content sensor, a porous medium wetness sensor, a plant health sensor, and a neural network control system. The seed group focused on the design and implementation of a sensor that could detect the moisture content of a seed batch. The porous medium wetness group concentrated on the development of a sensor to monitor the amount of nutrient solution within a porous plate incorporating either infrared reflectance or thermal conductance properties. The plant health group examined the possibility of remotely monitoring the health of the plants within the Biomass Production Chamber (BPC) using infrared reflectance properties. Finally, the neural network group concentrated on the ability to use parallel processing in order to control a robot arm and analyze the data from the health sensor to detect regions of a plant.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel e ciency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
Using algebra for massively parallel processor design and utilization
NASA Technical Reports Server (NTRS)
Campbell, Lowell; Fellows, Michael R.
1990-01-01
This paper summarizes the author's advances in the design of dense processor networks. Within is reported a collection of recent constructions of dense symmetric networks that provide the largest know values for the number of nodes that can be placed in a network of a given degree and diameter. The constructions are in the range of current potential engineering significance and are based on groups of automorphisms of finite-dimensional vector spaces.
ERIC Educational Resources Information Center
Pateman, Neil A., Ed.; Dougherty, Barbara J., Ed.; Zilliox, Joseph T., Ed.
2003-01-01
This volume of the 27th International Group for the Psychology of Mathematics Education Conference presents the following research reports: (1) Text Talk, Body Talk, Table Talk: A Design of Ratio and Proportion as Classroom Parallel Events (Dor Abrahamson); (2) Generalizing the Context and Generalising the Calculation (Janet Ainley); (3) Interview…
Lee, Tso-Ying; Chang, Shih-Chin; Chu, Hsin; Yang, Chyn-Yng; Ou, Keng-Liang; Chung, Min-Huey; Chou, Kuei-Ru
2013-11-01
In this study, we investigated the effects of group assertiveness training on assertiveness, social anxiety and satisfaction with interpersonal communication among patients with chronic schizophrenia. Only limited studies highlighted the effectiveness of group assertiveness training among inpatients with schizophrenia. Given the lack of group assertiveness training among patients with schizophrenia, further development of programmes focusing on facilitating assertiveness, self-confidence and social skills among inpatients with chronic schizophrenia is needed. This study used a prospective, randomized, single-blinded, parallel-group design. This study employed a prospective, randomized, parallel-group design. Seventy-four patients were randomly assigned to experimental group receiving 12 sessions of assertiveness training, or a supportive control group. Data collection took place for the period of June 2009-July 2010. Among patients with chronic schizophrenia, assertiveness, levels of social anxiety and satisfaction with interpersonal communication significantly improved immediately after the intervention and at the 3-month follow-up in the intervention group. The results of a generalized estimating equation (GEE) indicated that: (1) assertiveness significantly improved from pre- to postintervention and was maintained until the follow-up; (2) anxiety regarding social interactions significantly decreased after assertiveness training; and (3) satisfaction with interpersonal communication slightly improved after the 12-session intervention and at the 3-month follow-up. Assertivenss training is a non-invasive and inexpensive therapy that appears to improve assertiveness, social anxiety and interpersonal communication among inpatients with chronic schizophrenia. These findings may provide a reference guide to clinical nurses for developing assertiveness-training protocols. © 2013 Blackwell Publishing Ltd.
ERIC Educational Resources Information Center
Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark
2010-01-01
Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
Kirschvink, J L
1992-01-01
A common mistake in biomagnetic experimentation is the assumption that Helmholtz coils provide uniform magnetic fields; this is true only for a limited volume at their center. Substantial improvements on this design have been made during the past 140 years with systems of three, four, and five coils. Numerical comparisons of the field uniformity generated by these designs are made here, along with a table of construction details and recommendations for their use in experiments in which large volumes of uniform intensity magnetic exposures are needed. Double-wrapping, or systems of bifilar windings, can also help control for the non-magnetic effects of the electric coils used in many experiments. In this design, each coil is wrapped in parallel with two separate, adjacent strands of copper wire, rather than the single strand used normally. If currents are flowing in antiparallel directions, the magnetic fields generated by each strand will cancel and yield virtually no external magnetic field, whereas parallel currents will yield an external field. Both cases will produce similar non-magnetic effects of ohmic heating, and simple measures can reduce the small vibration and electric field differences. Control experiments can then be designed such that the only major difference between treated and untreated groups is the presence or absence of the magnetic field. Double-wrapped coils also facilitate the use of truly double-blind protocol, as the same apparatus can be used either for experimental or control groups.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
A Parallel Trade Study Architecture for Design Optimization of Complex Systems
NASA Technical Reports Server (NTRS)
Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.
Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT
NASA Astrophysics Data System (ADS)
Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao
2015-10-01
Mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To solve the problem of stiffness changing with the gravity vector of the mask exchange system in the MOBIE, the hybrid parallel mechanism design method was introduced into the whole research. By using the characteristics of high stiffness and precision of parallel structure, combined with large moving range of serial structure, a conceptual design of a hybrid parallel mask exchange system based on 3-RPS parallel mechanism was presented. According to the position requirements of the MOBIE, the SolidWorks structure model of the hybrid parallel mask exchange robot was established and the appropriate installation position without interfering with the related components and light path in the MOBIE of TMT was analyzed. Simulation results in SolidWorks suggested that 3-RPS parallel platform had good stiffness property in different gravity vector directions. Furthermore, through the research of the mechanism theory, the inverse kinematics solution of the 3-RPS parallel platform was calculated and the mathematical relationship between the attitude angle of moving platform and the angle of ball-hinges on the moving platform was established, in order to analyze the attitude adjustment ability of the hybrid parallel mask exchange robot. The proposed conceptual design has some guiding significance for the design of mask exchange system of the MOBIE on TMT.
Parallel and Serial Grouping of Image Elements in Visual Perception
ERIC Educational Resources Information Center
Houtkamp, Roos; Roelfsema, Pieter R.
2010-01-01
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun
2016-10-01
The maladjustment of photoelectric instrument binocular optical axis parallelism will affect the observe effect directly. A binocular optical axis parallelism digital calibration system is designed. On the basis of the principle of optical axis binocular photoelectric instrument calibration, the scheme of system is designed, and the binocular optical axis parallelism digital calibration system is realized, which include four modules: multiband parallel light tube, optical axis translation, image acquisition system and software system. According to the different characteristics of thermal infrared imager and low-light-level night viewer, different algorithms is used to localize the center of the cross reticle. And the binocular optical axis parallelism calibration is realized for calibrating low-light-level night viewer and thermal infrared imager.
ERIC Educational Resources Information Center
Sezer, Adem; Inel, Yusuf; Seçkin, Ahmet Çagdas; Uluçinar, Ufuk
2017-01-01
This study aimed to detect any relationship that may exist between classroom teacher candidates' class participation and their attention levels. The research method was a convergent parallel design, mixing quantitative and qualitative research techniques, and the study group was composed of 21 freshmen studying in the Classroom Teaching Department…
The Islamic State and U.S. Policy
2016-06-14
has conducted operations against the group in Iraq, Syria, and Libya. Parallel U.S. diplomatic efforts are designed to promote political...governments in support of those governments’ operations against Islamic State affiliates. Evolving counterterrorism cooperation and intelligence sharing...attacks. The interdependent nature of conflicts and political crises in Iraq, Syria, and other countries where the Islamic State operates complicates
Supervised Home Training of Dialogue Skills in Chronic Aphasia: A Randomized Parallel Group Study
ERIC Educational Resources Information Center
Nobis-Bosch, Ruth; Springer, Luise; Radermacher, Irmgard; Huber, Walter
2011-01-01
Purpose: The aim of this study was to prove the efficacy of supervised self-training for individuals with aphasia. Linguistic and communicative performance in structured dialogues represented the main study parameters. Method: In a cross-over design for randomized matched pairs, 18 individuals with chronic aphasia were examined during 12 weeks of…
ERIC Educational Resources Information Center
O'Callaghan, Paul; McMullen, John; Shannon, Ciaran; Rafferty, Harry; Black, Alastair
2013-01-01
Objective: To assess the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) delivered by nonclinical facilitators in reducing posttraumatic stress, depression, and anxiety and conduct problems and increasing prosocial behavior in a group of war-affected, sexually exploited girls in a single-blind, parallel-design, randomized,…
Interventions to Reduce Distress in Adult Victims of Rape and Sexual Violence: A Systematic Review
ERIC Educational Resources Information Center
Regehr, Cheryl; Alaggia, Ramona; Dennis, Jane; Pitts, Annabel; Saini, Michael
2013-01-01
Objectives: This article presents a systematic evaluation of the effectiveness of interventions aimed at reducing distress in adult victims of rape and sexual violence. Method: Studies were eligible for the review if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design. Results:…
Parallel Ray Tracing Using the Message Passing Interface
2007-09-01
software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very...Cameron, Senior Member, IEEE Abstract—Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to...National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel pro- cessing, prime numbers, ray tracing
Unique Study Designs in Nephrology: N-of-1 Trials and Other Designs.
Samuel, Joyce P; Bell, Cynthia S
2016-11-01
Alternatives to the traditional parallel-group trial design may be required to answer clinical questions in special populations, rare conditions, or with limited resources. N-of-1 trials are a unique trial design which can inform personalized evidence-based decisions for the patient when data from traditional clinical trials are lacking or not generalizable. A concise overview of factorial design, cluster randomization, adaptive designs, crossover studies, and n-of-1 trials will be provided along with pertinent examples in nephrology. The indication for analysis strategies such as equivalence and noninferiority trials will be discussed, as well as analytic pitfalls. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
A Comparison of Parallelism in Interface Designs for Computer-Based Learning Environments
ERIC Educational Resources Information Center
Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans
2004-01-01
In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants completed an abstract task. The five basic designs…
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade
NASA Astrophysics Data System (ADS)
Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.
2010-07-01
The engineering and design of systems as complex as the Hobby-Eberly Telescope's* new tracker require that multiple tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces has also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.
Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexicomore » Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.« less
Aerostructural analysis and design optimization of composite aircraft
NASA Astrophysics Data System (ADS)
Kennedy, Graeme James
High-performance composite materials exhibit both anisotropic strength and stiffness properties. These anisotropic properties can be used to produce highly-tailored aircraft structures that meet stringent performance requirements, but these properties also present unique challenges for analysis and design. New tools and techniques are developed to address some of these important challenges. A homogenization-based theory for beams is developed to accurately predict the through-thickness stress and strain distribution in thick composite beams. Numerical comparisons demonstrate that the proposed beam theory can be used to obtain highly accurate results in up to three orders of magnitude less computational time than three-dimensional calculations. Due to the large finite-element model requirements for thin composite structures used in aerospace applications, parallel solution methods are explored. A parallel direct Schur factorization method is developed. The parallel scalability of the direct Schur approach is demonstrated for a large finite-element problem with over 5 million unknowns. In order to address manufacturing design requirements, a novel laminate parametrization technique is presented that takes into account the discrete nature of the ply-angle variables, and ply-contiguity constraints. This parametrization technique is demonstrated on a series of structural optimization problems including compliance minimization of a plate, buckling design of a stiffened panel and layup design of a full aircraft wing. The design and analysis of composite structures for aircraft is not a stand-alone problem and cannot be performed without multidisciplinary considerations. A gradient-based aerostructural design optimization framework is presented that partitions the disciplines into distinct process groups. An approximate Newton-Krylov method is shown to be an efficient aerostructural solution algorithm and excellent parallel scalability of the algorithm is demonstrated. An induced drag optimization study is performed to compare the trade-off between wing weight and induced drag for wing tip extensions, raked wing tips and winglets. The results demonstrate that it is possible to achieve a 43% induced drag reduction with no weight penalty, a 28% induced drag reduction with a 10% wing weight reduction, or a 20% wing weight reduction with a 5% induced drag penalty from a baseline wing obtained from a structural mass-minimization problem with fixed aerodynamic loads.
Eduardoff, M; Gross, T E; Santos, C; de la Puente, M; Ballard, D; Strobl, C; Børsting, C; Morling, N; Fusco, L; Hussing, C; Egyed, B; Souto, L; Uacyisrael, J; Syndercombe Court, D; Carracedo, Á; Lareu, M V; Schneider, P M; Parson, W; Phillips, C; Parson, W; Phillips, C
2016-07-01
The EUROFORGEN Global ancestry-informative SNP (AIM-SNPs) panel is a forensic multiplex of 128 markers designed to differentiate an individual's ancestry from amongst the five continental population groups of Africa, Europe, East Asia, Native America, and Oceania. A custom multiplex of AmpliSeq™ PCR primers was designed for the Global AIM-SNPs to perform massively parallel sequencing using the Ion PGM™ system. This study assessed individual SNP genotyping precision using the Ion PGM™, the forensic sensitivity of the multiplex using dilution series, degraded DNA plus simple mixtures, and the ancestry differentiation power of the final panel design, which required substitution of three original ancestry-informative SNPs with alternatives. Fourteen populations that had not been previously analyzed were genotyped using the custom multiplex and these studies allowed assessment of genotyping performance by comparison of data across five laboratories. Results indicate a low level of genotyping error can still occur from sequence misalignment caused by homopolymeric tracts close to the target SNP, despite careful scrutiny of candidate SNPs at the design stage. Such sequence misalignment required the exclusion of component SNP rs2080161 from the Global AIM-SNPs panel. However, the overall genotyping precision and sensitivity of this custom multiplex indicates the Ion PGM™ assay for the Global AIM-SNPs is highly suitable for forensic ancestry analysis with massively parallel sequencing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Integrated Task And Data Parallel Programming: Language Design
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; West, Emily A.
1998-01-01
his research investigates the combination of task and data parallel language constructs within a single programming language. There are an number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program m. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
Parallel machine architecture and compiler design facilities
NASA Technical Reports Server (NTRS)
Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex
1990-01-01
The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of Delta project (which objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target toward different machine architectures) is summarized. Included are the surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.
Parallel Treatments Design: A Nested Single Subject Design for Comparing Instructional Procedures.
ERIC Educational Resources Information Center
Gast, David L.; Wolery, Mark
1988-01-01
This paper describes the parallel treatments design, a nested single subject experimental design that combines two concurrently implemented multiple probe designs, allows control for effects of extraneous variables through counterbalancing, and replicates its effects across behaviors. Procedural guidelines for the design's use and issues related…
Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.
1996-04-01
This research investigated the important issues related to the analysis and design of real - time systems targeted to parallel architectures. In...particular, the software specification models for real - time systems on parallel architectures were evaluated. A survey of current formal methods for...uniprocessor real - time systems specifications was conducted to determine their extensibility in specifying real - time systems on parallel architectures. In
Reconfigurable Model Execution in the OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Hwang, John T.
2017-01-01
NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables-i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
Final Report: PAGE: Policy Analytics Generation Engine
2016-08-12
develop a parallel framework for it. We also developed policies and methods by which a group of defensive resources (e.g. checkpoints) could be...Sarit Kraus. Learning to Reveal Information in Repeated Human -Computer Negotiation, Human -Agent Interaction Design and Models Workshop 2012. 04-JUN...Joseph Keshet, Sarit Kraus. Predicting Human Strategic Decisions Using Facial Expressions, International Joint Conference on Artificial
ERIC Educational Resources Information Center
Goldstein, Howard; Lackey, Kimberly C.; Schneider, Naomi J. B.
2014-01-01
This review presents a novel framework for evaluating evidence based on a set of parallel criteria that can be applied to both group and single-subject experimental design (SSED) studies. The authors illustrate use of this evaluation system in a systematic review of 67 articles investigating social skills interventions for preschoolers with autism…
Beyond the treatment effect: Evaluating the effects of patient preferences in randomised trials.
Walter, S D; Turner, R; Macaskill, P; McCaffery, K J; Irwig, L
2017-02-01
The treatments under comparison in a randomised trial should ideally have equal value and acceptability - a position of equipoise - to study participants. However, it is unlikely that true equipoise exists in practice, because at least some participants may have preferences for one treatment or the other, for a variety of reasons. These preferences may be related to study outcomes, and hence affect the estimation of the treatment effect. Furthermore, the effects of preferences can sometimes be substantial, and may even be larger than the direct effect of treatment. Preference effects are of interest in their own right, but they cannot be assessed in the standard parallel group design for a randomised trial. In this paper, we describe a model to represent the impact of preferences on trial outcomes, in addition to the usual treatment effect. In particular, we describe how outcomes might differ between participants who would choose one treatment or the other, if they were free to do so. Additionally, we investigate the difference in outcomes depending on whether or not a participant receives his or her preferred treatment, which we characterise through a so-called preference effect. We then discuss several study designs that have been proposed to measure and exploit data on preferences, and which constitute alternatives to the conventional parallel group design. Based on the model framework, we determine which of the various preference effects can or cannot be estimated with each design. We also illustrate these ideas with some examples of preference designs from the literature.
Parallel and serial grouping of image elements in visual perception.
Houtkamp, Roos; Roelfsema, Pieter R
2010-12-01
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.
Multilayer perceptron architecture optimization using parallel computing techniques.
Castro, Wilson; Oblitas, Jimy; Santa-Cruz, Roberto; Avila-George, Himer
2017-01-01
The objective of this research was to develop a methodology for optimizing multilayer-perceptron-type neural networks by evaluating the effects of three neural architecture parameters, namely, number of hidden layers (HL), neurons per hidden layer (NHL), and activation function type (AF), on the sum of squares error (SSE). The data for the study were obtained from quality parameters (physicochemical and microbiological) of milk samples. Architectures or combinations were organized in groups (G1, G2, and G3) generated upon interspersing one, two, and three layers. Within each group, the networks had three neurons in the input layer, six neurons in the output layer, three to twenty-seven NHL, and three AF (tan-sig, log-sig, and linear) types. The number of architectures was determined using three factorial-type experimental designs, which reached 63, 2 187, and 50 049 combinations for G1, G2 and G3, respectively. Using MATLAB 2015a, a logical sequence was designed and implemented for constructing, training, and evaluating multilayer-perceptron-type neural networks using parallel computing techniques. The results show that HL and NHL have a statistically relevant effect on SSE, and from two hidden layers, AF also has a significant effect; thus, both AF and NHL can be evaluated to determine the optimal combination per group. Moreover, in the three study groups, it is observed that there is an inverse relationship between the number of processors and the total optimization time.
Multilayer perceptron architecture optimization using parallel computing techniques
Castro, Wilson; Oblitas, Jimy; Santa-Cruz, Roberto; Avila-George, Himer
2017-01-01
The objective of this research was to develop a methodology for optimizing multilayer-perceptron-type neural networks by evaluating the effects of three neural architecture parameters, namely, number of hidden layers (HL), neurons per hidden layer (NHL), and activation function type (AF), on the sum of squares error (SSE). The data for the study were obtained from quality parameters (physicochemical and microbiological) of milk samples. Architectures or combinations were organized in groups (G1, G2, and G3) generated upon interspersing one, two, and three layers. Within each group, the networks had three neurons in the input layer, six neurons in the output layer, three to twenty-seven NHL, and three AF (tan-sig, log-sig, and linear) types. The number of architectures was determined using three factorial-type experimental designs, which reached 63, 2 187, and 50 049 combinations for G1, G2 and G3, respectively. Using MATLAB 2015a, a logical sequence was designed and implemented for constructing, training, and evaluating multilayer-perceptron-type neural networks using parallel computing techniques. The results show that HL and NHL have a statistically relevant effect on SSE, and from two hidden layers, AF also has a significant effect; thus, both AF and NHL can be evaluated to determine the optimal combination per group. Moreover, in the three study groups, it is observed that there is an inverse relationship between the number of processors and the total optimization time. PMID:29236744
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduced a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimize the memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
de Vreede, Gert-Jan; Briggs, Robert O; Reiter-Palmon, Roni
2010-04-01
The aim of this study was to compare the results of two different modes of using multiple groups (instead of one large group) to identify problems and develop solutions. Many of the complex problems facing organizations today require the use of very large groups or collaborations of groups from multiple organizations. There are many logistical problems associated with the use of such large groups, including the ability to bring everyone together at the same time and location. A field study involved two different organizations and compared productivity and satisfaction of group. The approaches included (a) multiple small groups, each completing the entire process from start to end and combining the results at the end (parallel mode); and (b) multiple subgroups, each building on the work provided by previous subgroups (serial mode). Groups using the serial mode produced more elaborations compared with parallel groups, whereas parallel groups produced more unique ideas compared with serial groups. No significant differences were found related to satisfaction with process and outcomes between the two modes. Preferred mode depends on the type of task facing the group. Parallel groups are more suited for tasks for which a variety of new ideas are needed, whereas serial groups are best suited when elaboration and in-depth thinking on the solution are required. Results of this research can guide the development of facilitated sessions of large groups or "teams of teams."
Berendsen, Agnes; Santoro, Aurelia; Pini, Elisa; Cevenini, Elisa; Ostan, Rita; Pietruszka, Barbara; Rolf, Katarzyna; Cano, Noël; Caille, Aurélie; Lyon-Belgy, Noëlle; Fairweather-Tait, Susan; Feskens, Edith; Franceschi, Claudio; de Groot, C P G M
2013-01-01
The proportion of European elderly is expected to increase to 30% in 2060. Combining dietary components may modulate many processes involved in ageing. So, it is likely that a healthful diet approach might have greater favourable impact on age-related decline than individual dietary components. This paper describes the design of a healthful diet intervention on inflammageing and its consequences in the elderly. The NU-AGE study is a parallel randomized one-year trial in 1250 apparently healthy, independently living European participants aged 65-80 years. Participants are randomised into either the diet group or control group. Participants in the diet group received dietary advice aimed at meeting the nutritional requirements of the ageing population. Special attention was paid to nutrients that may be inadequate or limiting in diets of elderly, such as vitamin D, vitamin B12, and calcium. C-reactive protein is measured as primary outcome. The NU-AGE study is the first dietary intervention investigating the effect of a healthful diet providing targeted nutritional recommendations for optimal health and quality of life in apparently healthy European elderly. Results of this intervention will provide evidence on the effect of a healthful diet on the prevention of age related decline. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses design of large-scale (1000x 1000) optical crossbar switching networks for use in parallel processing supercom-puters. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performances of alternative multiple access channel communications protocol-unslotted and slotted ALOHA and carrier sense multiple access (CSMA)-are compared with the performance of the classic arbitrated bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
National Combustion Code: Parallel Implementation and Performance
NASA Technical Reports Server (NTRS)
Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.
2000-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
Broadcasting a message in a parallel computer
Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN
2011-08-02
Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network optimized for point to point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
Optimality, sample size, and power calculations for the sequential parallel comparison design.
Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A
2011-10-15
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high-placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1 and 2 degree of freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
1994-01-01
CESDIS, the Center of Excellence in Space Data and Information Sciences was developed jointly by NASA, Universities Space Research Association (USRA), and the University of Maryland in 1988 to focus on the design of advanced computing techniques and data systems to support NASA Earth and space science research programs. CESDIS is operated by USRA under contract to NASA. The Director, Associate Director, Staff Scientists, and administrative staff are located on-site at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The primary CESDIS mission is to increase the connection between computer science and engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: 1) High performance computing, especially software design and performance evaluation for massively parallel machines; 2) Parallel input/output and data storage systems for high performance parallel computers; 3) Data base and intelligent data management systems for parallel computers; 4) Image processing; 5) Digital libraries; and 6) Data compression. CESDIS funds multiyear projects at U. S. universities and colleges. Proposals are accepted in response to calls for proposals and are selected on the basis of peer reviews. Funds are provided to support faculty and graduate students working at their home institutions. Project personnel visit Goddard during academic recess periods to attend workshops, present seminars, and collaborate with NASA scientists on research projects. Additionally, CESDIS takes on specific research tasks of shorter duration for computer science research requested by NASA Goddard scientists.
Influence of post pattern and resin cement curing mode on the retention of glass fibre posts.
Poskus, L T; Sgura, R; Paragó, F E M; Silva, E M; Guimarães, J G A
2010-04-01
To evaluate the influence of post design and roughness and cement system (dual- or self-cured) on the retention of glass fibre posts. Two tapered and smooth posts (Exacto Cônico No. 2 and White Post No. 1) and two parallel-sided and serrated posts (Fibrekor 1.25 mm and Reforpost No. 2) were adhesively luted with two different resin cements--a dual-cured (Rely-X ARC) and a self-cured (Cement Post)--in 40 single-rooted teeth. The teeth were divided into eight experimental groups (n = 5): PFD--Parallel-serrated-Fibrekor/dual-cured; PRD--Parallel-serrated-Reforpost/dual-cured; TED--Tapered-smooth-Exacto Cônico/dual-cured; TWD--Tapered-smooth-White Post/dual-cured; PFS--Parallel-serrated-Fibrekor/self-cured; PRS--Parallel-serrated-Reforpost/self-cured; TES--Tapered-smooth-Exacto Cônico/self-cured; TWS--Tapered-smooth-White Post/self-cured. The specimens were submitted to a pull-out test at a crosshead speed of 0.5 mm min(-1). Data were analysed using analysis of variance and Bonferroni's multiple comparison test (alpha = 0.05). Pull-out results (MPa) were: PFD = 8.13 (+/-1.71); PRD = 8.30 (+/-0.46); TED = 8.68 (+/-1.71); TWD = 9.35 (+/-1.99); PFS = 8.54 (+/-2.23); PRS = 7.09 (+/-1.96); TES = 8.27 (+/-3.92); TWS = 7.57 (+/-2.35). No statistical significant difference was detected for posts and cement factors and their interaction. The retention of glass fibre posts was not affected by post design or surface roughness nor by resin cement-curing mode. These results imply that the choice for serrated posts and self-cured cements is not related to an improvement in retention.
A mixed method pilot study: the researchers' experiences.
Secomb, Jacinta M; Smith, Colleen
2011-08-01
This paper reports on the outcomes of a small well designed pilot study. Pilot studies often disseminate limited or statistically meaningless results without adding to the body knowledge on the comparative research benefits. The design a pre-test post-test group parallel randomised control trial and inductive content analysis of focus group transcripts was tested specifically to increase outcomes in a proposed larger study. Strategies are now in place to overcome operational barriers and recruitment difficulties. Links between the qualitative and quantitative arms of the proposed larger study have been made; it is anticipated that this will add depth to the final report. More extensive reporting on the outcomes of pilot studies would assist researchers and increase the body of knowledge in this area.
Parallel integer sorting with medium and fine-scale parallelism
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
Poudel, Lokendra; Steinmetz, Nicole F; French, Roger H; Parsegian, V Adrian; Podgornik, Rudolf; Ching, Wai-Yim
2016-08-03
We present a first-principles density functional study elucidating the effects of solvent, metal ions and topology on the electronic structure and hydrogen bonding of 12 well-designed three dimensional G-quadruplex (G4-DNA) models in different environments. Our study shows that the parallel strand structures are more stable in dry environments and aqueous solutions containing K(+) ions within the tetrad of guanine but conversely, that the anti-parallel structure is more stable in solutions containing the Na(+) ions within the tetrad of guanine. The presence of metal ions within the tetrad of the guanine channel always enhances the stability of the G4-DNA models. The parallel strand structures have larger HOMO-LUMO gaps than antiparallel structures, which are in the range of 0.98 eV to 3.11 eV. Partial charge calculations show that sugar and alkali ions are positively charged whereas nucleobases, PO4 groups and water molecules are all negatively charged. Partial charges on each functional group with different signs and magnitudes contribute differently to the electrostatic interactions involving G4-DNA and favor the parallel structure. A comparative study between specific pairs of different G4-DNA models shows that the Hoogsteen OH and NH hydrogen bonds in the guanine tetrad are significantly influenced by the presence of metal ions and water molecules, collectively affecting the structure and the stability of G4-DNA.
Design considerations for parallel graphics libraries
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1994-01-01
Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
2016-05-11
AFRL-AFOSR-JP-TR-2016-0046 Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization U Kang Korea...maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect...Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA2386
Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael
2017-11-01
Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from conventional parallel design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in the study design among the 3 RCTs and presented 3 alternative methods on how to convert outcome data derived from one form of adaptive design to more conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert outcome data from one form of adaptive design to a format more consistent with conventional parallel RCTs facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA, which includes treatments that use an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to more conventional parallel RCTs to facilitate transparent and less-biased comparisons.
The specificity of learned parallelism in dual-memory retrieval.
Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy
2014-05-01
Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.
Type synthesis for 4-DOF parallel press mechanism using GF set theory
NASA Astrophysics Data System (ADS)
He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong
2015-07-01
Parallel mechanisms is used in the large capacity servo press to avoid the over-constraint of the traditional redundant actuation. Currently, the researches mainly focus on the performance analysis for some specific parallel press mechanisms. However, the type synthesis and evaluation of parallel press mechanisms is seldom studied, especially for the four degrees of freedom(DOF) press mechanisms. The type synthesis of 4-DOF parallel press mechanisms is carried out based on the generalized function(GF) set theory. Five design criteria of 4-DOF parallel press mechanisms are firstly proposed. The general procedure of type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented and then different structures of kinematic limbs are designed. Finally, the geometrical constraint complexity( GCC), kinematic pair complexity( KPC), and type complexity( TC) are proposed to evaluate the press types and the optimal press type is achieved. The general methodologies of type synthesis and evaluation for parallel press mechanism are suggested.
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper provides one scheme of the on-board parallel computer system designed for the Nano-satellite. Based on the development request that the Nano-satellite should have a small volume, low weight, low power cost, and intelligence, this scheme gets rid of the traditional one-computer system and dual-computer system with endeavor to improve the dependability, capability and intelligence simultaneously. According to the method of integration design, it employs the parallel computer system with shared memory as the main structure, connects the telemetric system, attitude control system, and the payload system by the intelligent bus, designs the management which can deal with the static tasks and dynamic task-scheduling, protect and recover the on-site status and so forth in light of the parallel algorithms, and establishes the fault diagnosis, restoration and system restructure mechanism. It accomplishes an on-board parallel computer system with high dependability, capability and intelligence, a flexible management on hardware resources, an excellent software system, and a high ability in extension, which satisfies with the conception and the tendency of the integration electronic design sufficiently.
Augmenting The HST Pure Parallel Observations
NASA Astrophysics Data System (ADS)
Patterson, Alan; Soutchkova, G.; Workman, W.
2012-05-01
Pure Parallel (PP) programs, designated GO/PAR, are a subgroup of General Observer (GO) programs. PP execute simultaneously with prime GO observations to which they are "attached". The PP observations can be performed with ACS/WFC, WFC3/UVIS or WFC3/IR and can be attached only to GO visits in which the instruments are either COS or STIS. The current HST Parallel Observation Processing System (POPS) was introduced after the Servicing Mission 4. It increased the HST productivity by 10% in terms of the utilization of HST prime orbits and was highly appreciated by the HST observers, allowing them to design efficient, multi-orbit survey projects for collecting large amounts of data on identifiable targets. The results of the WFC3 Infrared Spectroscopic Parallel Survey (WISP), Hubble Infrared Pure Parallel Imaging Extragalactic Survey (HIPPIES), and The Brightest-of-Reionizing Galaxies Pure Parallel Survey (BoRG) exemplify this benefit. In Cycle 19, however, the full advantage of GO/PARs came under risk. Whereas each of the previous cycles provided over one million seconds of exposure time for PP, in Cycle 19 that number reduced to 680,000 seconds. This dramatic decline occurred because of fundamental changes in the construction of COS prime observations. To preserve the science output of PP, the PP Working Group was tasked to find a way to recover the lost time and maximize the total time available for PP observing. The solution was to expand the definition of a PP opportunity to allow PP exposures to span one or more primary exposure readouts. So starting in HST Cycle 20, PP opportunities will no longer be limited to GO visits with a single uninterrupted exposure in an orbit. The resulting enhancements in HST Cycle 20 to the PP opportunity identification and matching process are expected to restore the PP time to previously achieved and possibly even greater levels.
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms using a general purpose platform for portable parallel programming called CARM is being developed and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.
Supercomputing on massively parallel bit-serial architectures
NASA Technical Reports Server (NTRS)
Iobst, Ken
1985-01-01
Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.
Gooding, Owen W
2004-06-01
The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
PUP: An Architecture to Exploit Parallel Unification in Prolog
1988-03-01
environment stacking mo del similar to the Warren Abstract Machine [23] since it has been shown to be super ior to other known models (see [21]). The storage...execute in groups of independent operations. Unifications belonging to different group s may not overlap. Also unification operations belonging to the...since all parallel operations on the unification units must complete before any of the units can star t executing the next group of parallel
Choi, Ji-Young; Paik, Doo-Jin; Kwon, Dae Young; Park, Yongsoon
2014-04-22
The purpose of this study was to investigate the hypothesis that dietary supplementation with rice bran fermented with Lentinus edodes (rice bran exo-biopolymer, RBEP), a substance known to contain arabinoxylan, enhances natural killer (NK) cell activity and modulates cytokine production in healthy adults. This study was designed in a randomized, double-blind, placebo-controlled, and parallel-group format. Eighty healthy participants with white blood cell counts of 4,000-8,000 cells/μL were randomly assigned to take six capsules per day of either 3 g RBEP or 3 g placebo for 8 weeks. Three participants in the placebo group were excluded after initiation of the protocol; no severe adverse effects from RBEP supplementation were reported. NK cell activity of peripheral blood mononuclear cells was measured using nonradioactive cytotoxicity assay kits and serum cytokine concentrations included interferon (IFN)-γ, tumor necrosis factor (TNF)-α, interleukin (IL)-2, IL-4, IL-10, and IL-12 were measured by Bio-Plex cytokine assay kit. This study was registered with the Clinical Research Information Service (KCT0000536). Supplementation of RBEP significantly increased IFN-γ production compared with the placebo group (P = 0.012). However, RBEP supplementation did not affect either NK cell activity or cytokine levels, including IL-2, IL-4, IL-10, IL-12, and TNF-α, compared with the placebo group. The data obtained in this study indicate that RBEP supplementation increases IFN-γ secretion without causing significant adverse effects, and thus may be beneficial to healthy individuals. This new rice bran-derived product may therefore be potentially useful to include in the formulation of solid and liquid foods designed for treatment and prevention of pathological states associated with defective immune responses.
Setsompop, Kawin; Alagappan, Vijayanand; Gagoski, Borjan; Witzel, Thomas; Polimeni, Jonathan; Potthast, Andreas; Hebrank, Franz; Fontius, Ulrich; Schmitt, Franz; Wald, Lawrence L; Adalsteinsson, Elfar
2008-12-01
Slice-selective RF waveforms that mitigate severe B1+ inhomogeneity at 7 Tesla using parallel excitation were designed and validated in a water phantom and human studies on six subjects using a 16-element degenerate stripline array coil driven with a butler matrix to utilize the eight most favorable birdcage modes. The parallel RF waveform design applied magnitude least-squares (MLS) criteria with an optimized k-space excitation trajectory to significantly improve profile uniformity compared to conventional least-squares (LS) designs. Parallel excitation RF pulses designed to excite a uniform in-plane flip angle (FA) with slice selection in the z-direction were demonstrated and compared with conventional sinc-pulse excitation and RF shimming. In all cases, the parallel RF excitation significantly mitigated the effects of inhomogeneous B1+ on the excitation FA. The optimized parallel RF pulses for human B1+ mitigation were only 67% longer than a conventional sinc-based excitation, but significantly outperformed RF shimming. For example the standard deviations (SDs) of the in-plane FA (averaged over six human studies) were 16.7% for conventional sinc excitation, 13.3% for RF shimming, and 7.6% for parallel excitation. This work demonstrates that excitations with parallel RF systems can provide slice selection with spatially uniform FAs at high field strengths with only a small pulse-duration penalty. (c) 2008 Wiley-Liss, Inc.
Integrated Task and Data Parallel Programming
NASA Technical Reports Server (NTRS)
Grimshaw, A. S.
1998-01-01
This research investigates the combination of task and data parallel language constructs within a single programming language. There are an number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
NASA Astrophysics Data System (ADS)
Kumar, Sumit; Das, Aloke
2013-06-01
Non-covalent interactions play a key role in governing the specific functional structures of biomolecules as well as materials. Thus molecular level understanding of these intermolecular interactions can help in efficient drug design and material synthesis. It has been found from X-ray crystallography that pure hydrocarbon solids (i.e. benzene, hexaflurobenzene) have mostly slanted T-shaped (herringbone) packing arrangement whereas mixed solid hydrocarbon crystals (i.e. solid formed from mixtures of benzene and hexafluorobenzene) exhibit preferentially parallel displaced (PD) π-stacked arrangement. Gas phase spectroscopy of the dimeric complexes of the building blocks of solid pure benzene and mixed benzene-hexafluorobenzene adducts exhibit similar structural motifs observed in the corresponding crystal strcutures. In this talk, I will discuss about the jet-cooled dimeric complexes of indole with hexafluorobenzene and p-xylene in the gas phase using Resonant two photon ionzation and IR-UV double resonance spectroscopy combined with quantum chemistry calculations. In stead of studying benzene...p-xylene and benzene...hexafluorobenzene dimers, we have studied corresponding indole complexes because N-H group is much more sensitive IR probe compared to C-H group. We have observed that indole...hexafluorobenzene dimer has parallel displaced (PD) π-stacked structure whereas indole...p-xylene has slanted T-shaped structure. We have shown here selective switching of dimeric structure from T-shaped to π-stacked by changing the substituent from electron donating (-CH3) to electron withdrawing group (fluorine) in one of the complexing partners. Thus, our results demonstrate that efficient engineering of the non-covalent interactions can lead to efficient drug design and material synthesis.
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
NASA Technical Reports Server (NTRS)
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) air-craft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: development of parallel multipoint approximation methods for the aerodynamic design of the HSCT, use of parallel multipoint approximation methods for structural optimization of the HSCT, mathematical and algorithmic development including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For the predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state of the art models of a complex aircraft configurations.
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 106 spatial cells and 1 × 1012 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40:74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top- down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometers CMOS technology, and its hardware is about 200 K gates.
Xyce parallel electronic simulator users guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas; Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers; A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models; Device models that are specifically tailored to meet Sandia's needs, including some radiationaware devices (for Sandia users only); and Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase-a message passing parallel implementation-which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Xyce parallel electronic simulator users' guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Xyce parallel electronic simulator users guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
A design methodology for portable software on parallel computers
NASA Technical Reports Server (NTRS)
Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.
1993-01-01
This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit application is proposed, and its error correction capability is analyzed. Performance of circuits implemented with CNTFETs and traditional MOSFETs respectively is also compared, and the former shows a 34.4% decrement of layout area and a 56.9% decrement of power consumption.
Xyce Parallel Electronic Simulator Users' Guide Version 6.6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been de- signed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel com- puting platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows onemore » to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright c 2002-2016 Sandia Corporation. All rights reserved. Acknowledgements The BSIM Group at the University of California, Berkeley developed the BSIM3, BSIM4, BSIM6, BSIM-CMG and BSIM-SOI models. The BSIM3 is Copyright c 1999, Regents of the University of California. The BSIM4 is Copyright c 2006, Regents of the University of California. The BSIM6 is Copyright c 2015, Regents of the University of California. The BSIM-CMG is Copyright c 2012 and 2016, Regents of the University of California. The BSIM-SOI is Copyright c 1990, Regents of the University of California. All rights reserved. The Mextram model has been developed by NXP Semiconductors until 2007, Delft University of Technology from 2007 to 2014, and Auburn University since April 2015. Copyrights c of Mextram are with Delft University of Technology, NXP Semiconductors and Auburn University. The MIT VS Model Research Group developed the MIT Virtual Source (MVS) model. Copyright c 2013 Massachusetts Institute of Technology (MIT). The EKV3 MOSFET model was developed by the EKV Team of the Electronics Laboratory-TUC of the Technical University of Crete. Trademarks Xyce TM Electronic Simulator and Xyce TM are trademarks of Sandia Corporation. Orcad, Orcad Capture, PSpice and Probe are registered trademarks of Cadence Design Systems, Inc. Microsoft, Windows and Windows 7 are registered trademarks of Microsoft Corporation. Medici, DaVinci and Taurus are registered trademarks of Synopsys Corporation. Amtec and TecPlot are trademarks of Amtec Engineering, Inc. All other trademarks are property of their respective owners. Contacts World Wide Web http://xyce.sandia.gov https://info.sandia.gov/xyce (Sandia only) Email xyce@sandia.gov (outside Sandia) xyce-sandia@sandia.gov (Sandia only) Bug Reports (Sandia only) http://joseki-vm.sandia.gov/bugzilla http://morannon.sandia.gov/bugzilla« less
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.
1995-01-01
High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
Stability of Large Parallel Tunnels Excavated in Weak Rocks: A Case Study
NASA Astrophysics Data System (ADS)
Ding, Xiuli; Weng, Yonghong; Zhang, Yuting; Xu, Tangjin; Wang, Tuanle; Rao, Zhiwen; Qi, Zufang
2017-09-01
Diversion tunnels are important structures for hydropower projects but are always placed in locations with less favorable geological conditions than those in which other structures are placed. Because diversion tunnels are usually large and closely spaced, the rock pillar between adjacent tunnels in weak rocks is affected on both sides, and conventional support measures may not be adequate to achieve the required stability. Thus, appropriate reinforcement support measures are needed, and the design philosophy regarding large parallel tunnels in weak rocks should be updated. This paper reports a recent case in which two large parallel diversion tunnels are excavated. The rock masses are thin- to ultra-thin-layered strata coated with phyllitic films, which significantly decrease the soundness and strength of the strata and weaken the rocks. The behaviors of the surrounding rock masses under original (and conventional) support measures are detailed in terms of rock mass deformation, anchor bolt stress, and the extent of the excavation disturbed zone (EDZ), as obtained from safety monitoring and field testing. In situ observed phenomena and their interpretation are also included. The sidewall deformations exhibit significant time-dependent characteristics, and large magnitudes are recorded. The stresses in the anchor bolts are small, but the extents of the EDZs are large. The stability condition under the original support measures is evaluated as poor. To enhance rock mass stability, attempts are made to reinforce support design and improve safety monitoring programs. The main feature of these attempts is the use of prestressed cables that run through the rock pillar between the parallel tunnels. The efficacy of reinforcement support measures is verified by further safety monitoring data and field test results. Numerical analysis is constantly performed during the construction process to provide a useful reference for decision making. The calculated deformations are in good agreement with the measured data, and the calculated forces of newly added cables show that the designed reinforcement is necessary and ensures sufficient stability. Finally, the role of safety monitoring in the evaluation of rock mass stability and the consideration of tunnel group effect are discussed. The work described in this paper aims to deepen the understanding of rock mass behaviors of large parallel tunnels in weak rocks and to improve the design philosophy.
National Combustion Code Parallel Performance Enhancements
NASA Technical Reports Server (NTRS)
Quealy, Angela; Benyo, Theresa (Technical Monitor)
2002-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.
Johari, Sarika; Gandhi, Tejal
2016-01-01
Background: Incidences of side effects and relapses are very common in chronic ulcerative colitis patients after termination of the treatment. Aims and Objectives: This study aims to compare the treatment with monoherbal formulation of Holarrhena antidysenterica with Mesalamine in chronic ulcerative colitis patients with special emphasis to side effects and relapse. Settings and Design: Patients were enrolled from an Ayurveda Hospital and a private Hospital, Gujarat. The study was randomized, parallel group and single blind design. Materials and Methods: The protocol was approved by Institutional Human Research Ethics Committee of Anand Pharmacy College on 23rd Jan 2013. Three groups (n = 10) were treated with drug Mesalamine (Group I), monoherbal tablet (Group II) and combination of both (Group III) respectively. Baseline characteristics, factors affecting quality of life, chronicity of disease, signs and symptoms, body weight and laboratory investigations were recorded. Side effects and complications developed, if any were recorded during and after the study. Statistical Analysis Used: Results were expressed as mean ± SEM. Data was statistically evaluated using t-test, Wilcoxon test, Mann Whitney U test, Kruskal Wallis test and ANOVA, wherever applicable, using GraphPad Prism 6. Results: All the groups responded positively to the treatments. All the patients were positive for occult blood in stool which reversed significantly after treatment along with rise in hemoglobin. Patients treated with herbal tablets alone showed maximal reduction in abdominal pain, diarrhea, and bowel frequency and stool consistency scores than Mesalamine treated patients. Treatment with herbal tablet alone and in combination with Mesalamine significantly reduced the stool infection. Patients treated with herbal drug alone and in combination did not report any side effects, relapse or complications while 50% patients treated with Mesalamine exhibited the relapse with diarrhea and flatulence after drug withdrawal. Conclusion: Thus, monoherbal formulation alone and with Mesalamine was efficacious than Mesalamine alone in UC. PMID:28182023
A Parallel Genetic Algorithm for Automated Electronic Circuit Design
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)
2000-01-01
We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.
Development and evaluation of a study design typology for human research.
Carini, Simona; Pollock, Brad H; Lehmann, Harold P; Bakken, Suzanne; Barbour, Edward M; Gabriel, Davera; Hagler, Herbert K; Harper, Caryn R; Mollah, Shamim A; Nahm, Meredith; Nguyen, Hien H; Scheuermann, Richard H; Sim, Ida
2009-11-14
A systematic classification of study designs would be useful for researchers, systematic reviewers, readers, and research administrators, among others. As part of the Human Studies Database Project, we developed the Study Design Typology to standardize the classification of study designs in human research. We then performed a multiple observer masked evaluation of active research protocols in four institutions according to a standardized protocol. Thirty-five protocols were classified by three reviewers each into one of nine high-level study designs for interventional and observational research (e.g., N-of-1, Parallel Group, Case Crossover). Rater classification agreement was moderately high for the 35 protocols (Fleiss' kappa = 0.442) and higher still for the 23 quantitative studies (Fleiss' kappa = 0.463). We conclude that our typology shows initial promise for reliably distinguishing study design types for quantitative human research.
Krogh, Jesper; Petersen, Lone; Timmermann, Michael; Saltin, Bengt; Nordentoft, Merete
2007-01-01
In western countries, the yearly incidence of depression is estimated to be 3-5% and the lifetime prevalence is 17%. In patient populations with chronic diseases the point prevalence may be 20%. Depression is associated with increased risk for various conditions such as osteoporoses, cardiovascular diseases, and dementia. WHO stated in 2000 that depression was the fourth leading cause of disease burden in terms of disability. In 2000 the cost of depression in the US was estimated to 83 billion dollars. A predominance of trials suggests that physical exercise has a positive effect on depressive symptoms. However, a meta-analysis from 2001 stated: "The effectiveness of exercise in reducing symptoms of depression cannot be determined because of a lack of good quality research on clinical populations with adequate follow-up." The major objective for this randomized trial is to compare the effect of non-aerobic, aerobic, and relaxation training on depressive symptoms using the blindly assessed Hamilton depression scale (HAM-D(17)) as primary outcome. The secondary outcome is the effect of the intervention on working status (i.e., lost days from work, employed/unemployed) and the tertiary outcomes consist of biological responses. The trial is designed as a randomized, parallel-group, observer-blinded clinical trial. Patients are recruited through general practitioners and psychiatrist and randomized to three different interventions: 1) non-aerobic, -- progressive resistance training, 2) aerobic training, -- cardio respiratory fitness, and 3) relaxation training with minimal impact on strength or cardio respiratory fitness. Training for all three groups takes place twice a week for 4 months. Evaluation of patients' symptoms takes place four and 12 months after inclusion. The trial is designed to include 45 patients in each group. Statistical analysis will be done as intention to treat (all randomized patients). Results from the DEMO trial will be reported according to the CONSORT guidelines in 2008-2009.
ARTS III/Parallel Processor Design Study
DOT National Transportation Integrated Search
1975-04-01
It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accomodate a large growth in air traff...
Milewski, Marek C; Kamel, Karol; Kurzynska-Kokorniak, Anna; Chmielewski, Marcin K; Figlerowicz, Marek
2017-10-01
Experimental methods based on DNA and RNA hybridization, such as multiplex polymerase chain reaction, multiplex ligation-dependent probe amplification, or microarray analysis, require the use of mixtures of multiple oligonucleotides (primers or probes) in a single test tube. To provide an optimal reaction environment, minimal self- and cross-hybridization must be achieved among these oligonucleotides. To address this problem, we developed EvOligo, which is a software package that provides the means to design and group DNA and RNA molecules with defined lengths. EvOligo combines two modules. The first module performs oligonucleotide design, and the second module performs oligonucleotide grouping. The software applies a nearest-neighbor model of nucleic acid interactions coupled with a parallel evolutionary algorithm to construct individual oligonucleotides, and to group the molecules that are characterized by the weakest possible cross-interactions. To provide optimal solutions, the evolutionary algorithm sorts oligonucleotides into sets, preserves preselected parts of the oligonucleotides, and shapes their remaining parts. In addition, the oligonucleotide sets can be designed and grouped based on their melting temperatures. For the user's convenience, EvOligo is provided with a user-friendly graphical interface. EvOligo was used to design individual oligonucleotides, oligonucleotide pairs, and groups of oligonucleotide pairs that are characterized by the following parameters: (1) weaker cross-interactions between the non-complementary oligonucleotides and (2) more uniform ranges of the oligonucleotide pair melting temperatures than other available software products. In addition, in contrast to other grouping algorithms, EvOligo offers time-efficient sorting of paired and unpaired oligonucleotides based on various parameters defined by the user.
Penn State University ground software support for X-ray missions.
NASA Astrophysics Data System (ADS)
Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.
1995-03-01
The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Xyce parallel electronic simulator : users' guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-artmore » algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.« less
Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision
ERIC Educational Resources Information Center
Hinkle, Michelle Gimenez
2008-01-01
This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.
Parallel Processing at the High School Level.
ERIC Educational Resources Information Center
Sheary, Kathryn Anne
This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…
Li, Mo; Lyu, Ji-Hui; Zhang, Yi; Gao, Mao-Long; Li, Wen-Jie; Ma, Xin
2017-12-01
Alzheimer disease (AD) is one of the most common diseases among the older adults. Currently, various nonpharmacological interventions are used for the treatment of AD. Such as reminiscence therapy is being widely used in Western countries. However, it is often used as an empirical application in China; the evidence-based efficacy of reminiscence therapy in AD patients remains to be determined. Therefore, the aim of this research is to assess the effectives of reminiscence therapy for Chinese elderly. This is a randomized parallel-design controlled trial. Mild and moderate AD patients who are in the Beijing Geriatric Hospital, China will be randomized into control and intervention groups (n = 45 for each group). For the intervention group, along with conventional drug therapy, participants will be exposed to a reminiscence therapy of 35 to 45 minutes, 2 times/wk for 12 consecutive weeks. Patients in the control group will undergo conventional drug therapy only. The primary outcome measure will be the differences in Alzheimer disease Assessment Scale-Cognitive Section Score. The secondary outcome measures will be the differences in the Cornell scale for depression in dementia, Neuropsychiatric Inventory score, and Barthel Index scores at baseline, at 4 and 12 weeks of treatment, and 12 weeks after treatment. The protocols have been approved by the ethics committee of Beijing Geriatric Hospital of China (approval no. 2015-010). Findings will be disseminated through presentation at scientific conferences and in academic journals. Chinese Clinical Trial Registry identifier ChiCTR-INR-16009505. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.
Fringe Capacitance of a Parallel-Plate Capacitor.
ERIC Educational Resources Information Center
Hale, D. P.
1978-01-01
Describes an experiment designed to measure the forces between charged parallel plates, and determines the relationship among the effective electrode area, the measured capacitance values, and the electrode spacing of a parallel plate capacitor. (GA)
Hoppe, Michael; Ross, Alastair B; Svelander, Cecilia; Sandberg, Ann-Sofie; Hulthén, Lena
2018-05-23
To investigate the effects of eating wholegrain rye bread with high or low amounts of phytate on iron status in women under free-living conditions. In this 12-week, randomized, parallel-design intervention study, 102 females were allocated into two groups, a high-phytate-bread group or a low-phytate-bread group. These two groups were administered: 200 g of blanched wholegrain rye bread/day, or 200 g dephytinized wholegrain rye bread/day. The bread was administered in addition to their habitual daily diet. Iron status biomarkers and plasma alkylresorcinols were analyzed at baseline and post-intervention. Fifty-five females completed the study. In the high-phytate-bread group (n = 31) there was no change in any of the iron status biomarkers after 12 weeks of intervention (p > 0.05). In the low-phytate bread group (n = 24) there were significant decreases in both ferritin (mean = 12%; from 32 ± 7 to 27 ± 6 µg/L, geometric mean ± SEM, p < 0.018) and total body iron (mean = 12%; from 6.9 ± 1.4 to 5.4 ± 1.1 mg/kg, p < 0.035). Plasma alkylresorcinols indicated that most subjects complied with the intervention. In Swedish females of reproductive age, 12 weeks of high-phytate wholegrain bread consumption had no effect on iron status. However, consumption of low-phytate wholegrain bread for 12 weeks resulted in a reduction of markers of iron status. Although single-meal studies clearly show an increase in iron bioavailability from dephytinization of cereals, medium-term consumption of reduced phytate bread under free-living conditions suggests that this strategy does not work to improve iron status in healthy women of reproductive age.
ERIC Educational Resources Information Center
Bluemel, Brody
2014-01-01
This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…
NASA Astrophysics Data System (ADS)
Yusepa, B. G. P.; Kusumah, Y. S.; Kartasasmita, B. G.
2018-01-01
The aim of this study is to get an in-depth understanding of students’ abstract-thinking ability in mathematics learning. This study was an experimental research with pre-test and post-test control group design. The subject of this study was eighth-grade students from two junior high schools in Bandung. In each schools, two parallel groups were selected and assigned into control and experimental groups. The experimental group was exposed to Cognitive Apprenticeship Instruction (CAI) treatment, whereas the control group was exposed to conventional learning. The results showed that abstract-thinking ability of students in experimental group was better than that of those in control group in which it could be observed from the overall and school level. It could be concluded that CAI could be a good alternative learning model to enhance students’ abstract-thinking ability.
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as review of portable, parallel programming environments. The first effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be well-applied towards large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
NASA Astrophysics Data System (ADS)
Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri
2017-12-01
Indonesia is an area prone to earthquake that may cause casualties and damage to buildings. The fatalities or the injured are not largely caused by the earthquake, but by building collapse. The collapse of the building is resulted from the building behaviour against the earthquake, and it depends on many factors, such as architectural design, geometry configuration of structural elements in horizontal and vertical plans, earthquake zone, geographical location (distance to earthquake center), soil type, material quality, and construction quality. One of the geometry configurations that may lead to the collapse of the building is irregular configuration of non-parallel system. In accordance with FEMA-451B, irregular configuration in non-parallel system is defined to have existed if the vertical lateral force-retaining elements are neither parallel nor symmetric with main orthogonal axes of the earthquake-retaining axis system. Such configuration may lead to torque, diagonal translation and local damage to buildings. It does not mean that non-parallel irregular configuration should not be formed on architectural design; however the designer must know the consequence of earthquake behaviour against buildings with irregular configuration of non-parallel system. The present research has the objective to identify earthquake behaviour in architectural geometry with irregular configuration of non-parallel system. The present research was quantitative with simulation experimental method. It consisted of 5 models, where architectural data and model structure data were inputted and analyzed using the software SAP2000 in order to find out its performance, and ETAB2015 to determine the eccentricity occurred. The output of the software analysis was tabulated, graphed, compared and analyzed with relevant theories. For areas of strong earthquake zones, avoid designing buildings which wholly form irregular configuration of non-parallel system. If it is inevitable to design a building with building parts containing irregular configuration of non-parallel system, make it more rigid by forming a triangle module, and use the formula.A good collaboration is needed between architects and structural experts in creating earthquake architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations '' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of manymore » computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.« less
Fernandez, Elizabeth; Bergado Rosado, Jorge A.; Rodriguez Perez, Daymi; Salazar Santana, Sonia; Torres Aguilar, Maydane; Bringas, Maria Luisa
2017-01-01
Many training programs have been designed using modern software to restore the impaired cognitive functions in patients with acquired brain damage (ABD). The objective of this study was to evaluate the effectiveness of a computer-based training program of attention and memory in patients with ABD, using a two-armed parallel group design, where the experimental group (n = 50) received cognitive stimulation using RehaCom software, and the control group (n = 30) received the standard cognitive stimulation (non-computerized) for eight weeks. In order to assess the possible cognitive changes after the treatment, a post-pre experimental design was employed using the following neuropsychological tests: Wechsler Memory Scale (WMS) and Trail Making test A and B. The effectiveness of the training procedure was statistically significant (p < 0.05) when it established the comparison between the performance in these scales, before and after the training period, in each patient and between the two groups. The training group had statistically significant (p < 0.001) changes in focused attention (Trail A), two subtests (digit span and logical memory), and the overall score of WMS. Finally, we discuss the advantages of computerized training rehabilitation and further directions of this line of work. PMID:29301194
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, organizations are suggested, using multiple storage devices. Problem areas are also identified and discussed.
Two Parallel Olfactory Pathways for Processing General Odors in a Cockroach
Watanabe, Hidehiro; Nishino, Hiroshi; Mizunami, Makoto; Yokohari, Fumio
2017-01-01
In animals, sensory processing via parallel pathways, including the olfactory system, is a common design. However, the mechanisms that parallel pathways use to encode highly complex and dynamic odor signals remain unclear. In the current study, we examined the anatomical and physiological features of parallel olfactory pathways in an evolutionally basal insect, the cockroach Periplaneta americana. In this insect, the entire system for processing general odors, from olfactory sensory neurons to higher brain centers, is anatomically segregated into two parallel pathways. Two separate populations of secondary olfactory neurons, type1 and type2 projection neurons (PNs), with dendrites in distinct glomerular groups relay olfactory signals to segregated areas of higher brain centers. We conducted intracellular recordings, revealing olfactory properties and temporal patterns of both types of PNs. Generally, type1 PNs exhibit higher odor-specificities to nine tested odorants than type2 PNs. Cluster analyses revealed that odor-evoked responses were temporally complex and varied in type1 PNs, while type2 PNs exhibited phasic on-responses with either early or late latencies to an effective odor. The late responses are 30–40 ms later than the early responses. Simultaneous intracellular recordings from two different PNs revealed that a given odor activated both types of PNs with different temporal patterns, and latencies of early and late responses in type2 PNs might be precisely controlled. Our results suggest that the cockroach is equipped with two anatomically and physiologically segregated parallel olfactory pathways, which might employ different neural strategies to encode odor information. PMID:28529476
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability the current state-of-the-art in the following areas: {sm_bullet} Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. {sm_bullet} Improved performance for allmore » numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. {sm_bullet} Device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices. {sm_bullet} A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI). {sm_bullet} Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing of computing platforms. These include serial, shared-memory and distributed-memory parallel implementation - which allows it to run efficiently on the widest possible number parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce These input formats include standard analytical models, behavioral models look-up Parallel Electronic Simulator is designed to support a variety of device model inputs. tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important feature of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.« less
Optimization under uncertainty of parallel nonlinear energy sinks
NASA Astrophysics Data System (ADS)
Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe
2017-04-01
Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.
Pareek, Sonia; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Atri, Mansi; Singh, Kushpal
2015-01-01
Context: Individuals with special needs may have great limitations in oral hygiene performance due to their potential motor, sensory, and intellectual disabilities. Thus, oral health care utilization is low among the disabled people. Hearing disorders affect the general behavior and impair the level of social functioning. Objectives: The present study was conducted to assess the dental health outcomes following supervised tooth brushing among institutionalized hearing impaired and mute children in Jaipur, Rajasthan. Materials and Methods: The study followed a single-blind, parallel, and randomized controlled design. A total of 315 students were divided into three groups of 105 children each. Group A included resident students, who underwent supervised tooth brushing under the supervision of their parents. The non-resident students were further divided into two groups: Group B and Group C. Group B children were under the supervision of a caregiver and Group C children were under the supervision of both investigator and caregiver. Results: There was an average reduction in plaque score during the subsequent second follow-up conducted 3 weeks after the start of the study and in the final follow-up conducted at 6 weeks. There was also a marked reduction in the gingival index scores in all the three groups. Conclusion: The program of teacher and parent supervised toothbrushing with fluoride toothpaste can be safely targeted to socially deprived communities and can enable a significant reduction in plaque and gingival scores. Thus, an important principle of oral health education is the active involvement of parents and caregivers. PMID:26236676
Differential Draining of Parallel-Fed Propellant Tanks in Morpheus and Apollo Flight
NASA Technical Reports Server (NTRS)
Hurlbert, Eric; Guardado, Hector; Hernandez, Humberto; Desai, Pooja
2015-01-01
Parallel-fed propellant tanks are an advantageous configuration for many spacecraft. Parallel-fed tanks allow the center of gravity (cg) to be maintained over the engine(s), as opposed to serial-fed propellant tanks which result in a cg shift as propellants are drained from tank one tank first opposite another. Parallel-fed tanks also allow for tank isolation if that is needed. Parallel tanks and feed systems have been used in several past vehicles including the Apollo Lunar Module. The design of the feedsystem connecting the parallel tank is critical to maintain balance in the propellant tanks. The design must account for and minimize the effect of manufacturing variations that could cause delta-p or mass flowrate differences, which would lead to propellant imbalance. Other sources of differential draining will be discussed. Fortunately, physics provides some self-correcting behaviors that tend to equalize any initial imbalance. The question concerning whether or not active control of propellant in each tank is required or can be avoided or not is also important to answer. In order to provide data on parallel-fed tanks and differential draining in flight for cryogenic propellants (as well as any other fluid), a vertical test bed (flying lander) for terrestrial use was employed. The Morpheus vertical test bed is a parallel-fed propellant tank system that uses passive design to keep the propellant tanks balanced. The system is operated in blow down. The Morpheus vehicle was instrumented with a capacitance level sensor in each propellant tank in order to measure the draining of propellants in over 34 tethered and 12 free flights. Morpheus did experience an approximately 20 lb/m imbalance in one pair of tanks. The cause of this imbalance will be discussed. This paper discusses the analysis, design, flight simulation vehicle dynamic modeling, and flight test of the Morpheus parallel-fed propellant. The Apollo LEM data is also examined in this summary report of the flight data.
Statistical design of quantitative mass spectrometry-based proteomic experiments.
Oberg, Ann L; Vitek, Olga
2009-05-01
We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
National Combustion Code: Parallel Performance
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2001-01-01
This report discusses the National Combustion Code (NCC). The NCC is an integrated system of codes for the design and analysis of combustion systems. The advanced features of the NCC meet designers' requirements for model accuracy and turn-around time. The fundamental features at the inception of the NCC were parallel processing and unstructured mesh. The design and performance of the NCC are discussed.
NASA Astrophysics Data System (ADS)
Stuart, J. A.
2011-12-01
This paper explores the challenges in implementing a message passing interface usable on systems with data-parallel processors, and more specifically GPUs. As a case study, we design and implement the ``DCGN'' API on NVIDIA GPUs that is similar to MPI and allows full access to the underlying architecture. We introduce the notion of data-parallel thread-groups as a way to map resources to MPI ranks. We use a method that also allows the data-parallel processors to run autonomously from user-written CPU code. In order to facilitate communication, we use a sleep-based polling system to store and retrieve messages. Unlike previous systems, our method provides both performance and flexibility. By running a test suite of applications with different communication requirements, we find that a tolerable amount of overhead is incurred, somewhere between one and five percent depending on the application, and indicate the locations where this overhead accumulates. We conclude that with innovations in chipsets and drivers, this overhead will be mitigated and provide similar performance to typical CPU-based MPI implementations while providing fully-dynamic communication.
A Programming Framework for Scientific Applications on CPU-GPU Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, John
2013-03-24
At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor and instead to move to multiplemore » parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance on a broad range of problems than their CPU counterparts, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.« less
Ropes: Support for collective opertions among distributed threads
NASA Technical Reports Server (NTRS)
Haines, Matthew; Mehrotra, Piyush; Cronk, David
1995-01-01
Lightweight threads are becoming increasingly useful in supporting parallelism and asynchronous control structures in applications and language implementations. Recently, systems have been designed and implemented to support interprocessor communication between lightweight threads so that threads can be exploited in a distributed memory system. Their use, in this setting, has been largely restricted to supporting latency hiding techniques and functional parallelism within a single application. However, to execute data parallel codes independent of other threads in the system, collective operations and relative indexing among threads are required. This paper describes the design of ropes: a scoping mechanism for collective operations and relative indexing among threads. We present the design of ropes in the context of the Chant system, and provide performance results evaluating our initial design decisions.
Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.« less
Kypri, Kypros; McCambridge, Jim; Wilson, Amanda; Attia, John; Sheeran, Paschal; Bowe, Steve; Vater, Tina
2011-02-14
What study participants think about the nature of a study has been hypothesised to affect subsequent behaviour and to potentially bias study findings. In this trial we examine the impact of awareness of study design and allocation on participant drinking behaviour. A three-arm parallel group randomised controlled trial design will be used. All recruitment, screening, randomisation, and follow-up will be conducted on-line among university students. Participants who indicate a hazardous level of alcohol consumption will be randomly assigned to one of three groups. Group A will be informed their drinking will be assessed at baseline and again in one month (as in a cohort study design). Group B will be told the study is an intervention trial and they are in the control group. Group C will be told the study is an intervention trial and they are in the intervention group. All will receive exactly the same brief educational material to read. After one month, alcohol intake for the past 4 weeks will be assessed. The experimental manipulations address subtle and previously unexplored ways in which participant behaviour may be unwittingly influenced by standard practice in trials. Given the necessity of relying on self-reported outcome, it will not be possible to distinguish true behaviour change from reporting artefact. This does not matter in the present study, as any effects of awareness of study design or allocation involve bias that is not well understood. There has been little research on awareness effects, and our outcomes will provide an indication of the possible value of further studies of this type and inform hypothesis generation. Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12610000846022.
Talboom-Kamp, Esther P W A; Verdijk, Noortje A; Kasteleyn, Marise J; Harmans, Lara M; Talboom, Irvin J S H; Numans, Mattijs E; Chavannes, Niels H
2017-09-27
To analyse the effect on therapeutic control and self-management skills of the implementation of self-management programmes, including eHealth by e-learning versus group training. Primary Care Thrombosis Service Center. Of the 247 oral anticoagulation therapy (OAT) patients, 63 started self-management by e-learning, 74 self-management by group training and 110 received usual care. Parallel cohort design with two randomised self-management groups (e-learning and group training) and a group receiving usual care. The effect of implementation of self-management on time in therapeutic range (TTR) was analysed with multilevel linear regression modelling. Usage of a supporting eHealth platform and the impact on self-efficacy (Generalised Self-Efficacy Scale (GSES)) and education level were analysed with linear regression analysis. After intervention, TTR was measured in three time periods of 6 months. (1) TTR, severe complications,(2) usage of an eHealth platform,(3) GSES, education level. Analysis showed no significant differences in TTR between the three time periods (p=0.520), the three groups (p=0.460) or the groups over time (p=0.263). Comparison of e-learning and group training showed no significant differences in TTR between the time periods (p=0.614), the groups (p=0.460) or the groups over time (p=0.263). No association was found between GSES and TTR (p=0.717) or education level and TTR (p=0.107). No significant difference was found between the self-management groups in usage of the platform (0-6 months p=0.571; 6-12 months p=0.866; 12-18 months p=0.260). The percentage of complications was low in all groups (3.2%; 1.4%; 0%). No differences were found between OAT patients trained by e-learning or by a group course regarding therapeutic control (TTR) and usage of a supporting eHealth platform. The TTR was similar in self-management and regular care patients. With adequate e-learning or group training, self-management seems safe and reliable for a selected proportion of motivated vitamin K antagonist patients. NTR3947. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
McWhannell, Nicola; Henaghan, Jayne L.
2018-01-01
This paper outlines the implementation of a programme of work that started with the development of a population-level children’s health, fitness and lifestyle study in 1996 (SportsLinx) leading to selected interventions one of which is described in detail: the Active City of Liverpool, Active Schools and SportsLinx (A-CLASS) Project. The A-CLASS Project aimed to quantify the effectiveness of structured and unstructured physical activity (PA) programmes on children’s PA, fitness, body composition, bone health, cardiac and vascular structures, fundamental movement skills, physical self-perception and self-esteem. The study was a four-arm parallel-group school-based cluster randomised controlled trial (clinical trials no. NCT02963805), and compared different exposure groups: a high intensity PA (HIPA) group, a fundamental movement skill (FMS) group, a PA signposting (PASS) group and a control group, in a two-schools-per-condition design. Baseline findings indicate that children’s fundamental movement skill competence levels are low-to-moderate, yet these skills are inversely associated with percentage body fat. Outcomes of this project will make an important contribution to the design and implementation of children’s PA promotion initiatives.
NASA Astrophysics Data System (ADS)
Yusepa, B. G. P.; Kusumah, Y. S.; Kartasasmita, B. G.
2018-03-01
This study aims to get an in-depth understanding of the enhancement of students’ mathematical representation. This study is experimental research with pretest-posttest control group design. The subject of this study is the students’ of the eighth grade from junior high schools in Bandung: high-level and middle-level. In each school, two parallel groups were chosen as a control group and an experimental group. The experimental group was given cognitive apprenticeship instruction (CAI) treatment while the control group was given conventional learning. The results show that the enhancement of students’ mathematical representation who obtained CAI treatment was better than the conventional one, viewed which can be observed from the overall, mathematical prior knowledge (MPK), and school level. It can be concluded that CAI can be used as a good alternative learning model to enhance students’ mathematical representation.
Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)
2000-01-01
HiMAP is a three level parallel middleware that can be interfaced to a large scale global design environment for code independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computation tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process, dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) on a distributed memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin
2017-07-01
Rational and high-throughput optimization of mammalian cell culture media has a great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information encountered in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Scalable Unix commands for parallel processors : a high-performance implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, E.; Lusk, E.; Gropp, W.
2001-06-22
We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1989-01-01
A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
[Clinical research. XIII. Research design contribution in the structured revision of an article].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2013-01-01
The quality of information obtained in accordance to research design is integrated to the revision structured in relation to the causality model, used in the article "Reduction in the Incidence of Nosocomial Pneumonia Poststroke by Using the 'Turn-mob' Program", which corresponds to a clinical trial design. Points to identify and analyze are ethical issues in order to safeguard the security and respect for patients, randomization that seek to create basal homogeneous groups, subjects with the same probability of receiving any of the maneuvers in comparison, with the same pre maneuver probability of adherence, and which facilitate the blinding of outcome measurement and the distribution between groups of subjects with the same probability of leaving the study for reasons beyond the maneuvers. Other aspects are the relativity of comparison, the blinding of the maneuver, the parallel application of comparative maneuver, early stopping, and analysis according to the degree of adherence. The analysis in accordance with the design is complementary, since it is done based on the architectural model of causality, and the statistical and clinical relevance consideration.
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless number of clinical trials, measurements of outcomes rely on instrument questionnaire items which however often suffer measurement error problems which in turn affect statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. Scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume fixed true score variance assumption as opposed to usual fixed total variance assumption. That assumption is critical and practically relevant to show that smaller measurement errors are inversely associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing significance of C(α). Closed-form power functions and samples size determination formulas are derived in terms of C(α), for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of comparison of interest. The derived power functions are well validated by simulation studies that show that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research designs or settings, in order to increase statistical power, development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation strutures reflecting more real world situations would be a valuable future study.
NASA Astrophysics Data System (ADS)
Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo
This paper deals with a digital control scheme of multiple paralleled high frequency switching current amplifier with four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track high precise current pattern in Gradient Coils (GC), the proposal current amplifier cancels the switching current ripples in GC with each other and designed optimum switching gate pulse patterns without influences of the large filter current ripple amplitude. The optimal control implementation and the linear control theory in GC current amplifiers have affinity to each other with excellent characteristics. The digital control system can be realized easily through the digital control implementation, DSPs or microprocessors. Multiple-parallel operational microprocessors realize two or higher paralleled GC current pattern tracking amplifier with optimal control design and excellent results are given for improving the image quality of MRI systems.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
DOT National Transportation Integrated Search
2017-08-01
Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20. Skewed-parallel cross-...
DOT National Transportation Integrated Search
2017-08-01
Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20. Skewed-parallel cross-...
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.
Fast I/O for Massively Parallel Applications
NASA Technical Reports Server (NTRS)
OKeefe, Matthew T.
1996-01-01
The two primary goals for this report were the design, contruction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the IO requirements of highly parallel applications. In addition, further work in parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.
Simplified Parallel Domain Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson III, David J
2011-01-01
Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep bymore » performing teleconnection analysis across ensemble runs of terascale atmospheric CO{sub 2} and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.« less
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm
NASA Astrophysics Data System (ADS)
Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui
2017-05-01
The economic costs together with filter efficiency are taken as targets to optimize the parameter of passive filter. Furthermore, the method of combining pseudo-parallel genetic algorithm with adaptive genetic algorithm is adopted in this paper. In the early stages pseudo-parallel genetic algorithm is introduced to increase the population diversity, and adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of pseudo-parallel genetic algorithm is improved to change with population diversity adaptively. Simulation results show that the filter designed by the proposed method has better filtering effect with lower economic cost, and can be used in engineering.
Language Classification using N-grams Accelerated by FPGA-based Bloom Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, A; Gokhale, M
N-Gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom Filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs at 85x comparable software and 1.45x the competing hardware design.
INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Larkman, David J.; Nunes, Rita G.
2007-04-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
La, Moonwoo; Park, Sang Min; Kim, Dong Sung
2015-01-01
In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely, a centrifugal multiplexing fixed-volume dispenser (C-MUFID) was designed with microfluidic structures based on the theoretical modeling about a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and they were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed-volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performances, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of the sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemical. PMID:25610516
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu
2006-01-01
Objectives This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires. PMID:16995935
Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu
2006-09-22
This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures--all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires.
ERIC Educational Resources Information Center
Herrenkohl, Ellen C.
1978-01-01
Group therapy participation and religious conversion have been cited as sources of personal growth by a number of formerly abusive parents. The parallels in the dynamics of change for the two kinds of experiences are discussed in the context of the factors thought to lead to abuse. (Author)
PISCES: An environment for parallel scientific computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
SNSPD with parallel nanowires (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ejrnaes, Mikkel; Parlato, Loredana; Gaggero, Alessandro; Mattioli, Francesco; Leoni, Roberto; Pepe, Giampiero; Cristiano, Roberto
2017-05-01
Superconducting nanowire single-photon detectors (SNSPDs) have shown to be promising in applications such as quantum communication and computation, quantum optics, imaging, metrology and sensing. They offer the advantages of a low dark count rate, high efficiency, a broadband response, a short time jitter, a high repetition rate, and no need for gated-mode operation. Several SNSPD designs have been proposed in literature. Here, we discuss the so-called parallel nanowires configurations. They were introduced with the aim of improving some SNSPD property like detection efficiency, speed, signal-to-noise ratio, or photon number resolution. Although apparently similar, the various parallel designs are not the same. There is no one design that can improve the mentioned properties all together. In fact, each design presents its own characteristics with specific advantages and drawbacks. In this work, we will discuss the various designs outlining peculiarities and possible improvements.
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
2014-01-01
Background Cerebral palsy (CP) and brain injury (BI) are common conditions that have devastating effects on a child’s ability to use their hands. Hand splinting and task-specific training are two interventions that are often used to address deficits in upper limb skills, both in isolation or concurrently. The aim of this paper is to describe the method to be used to conduct two randomised controlled trials (RCT) investigating (a) the immediate effect of functional hand splints, and (b) the effect of functional hand splints used concurrently with task-specific training compared to functional hand splints alone, and to task-specific training alone in children with CP and BI. The Cognitive Orientation to Occupational Performance (CO-OP) approach will be the task-specific training approach used. Methods/Design Two concurrent trials; a two group, parallel design, RCT with a sample size of 30 participants (15 per group); and a three group, parallel design, assessor blinded, RCT with a sample size of 45 participants (15 per group). Inclusion criteria: age 4-15 years; diagnosis of CP or BI; Manual Abilities Classification System (MACS) level I – IV; hand function goals; impaired hand function; the cognitive, language and behavioural ability to participate in CO-OP. Participants will be randomly allocated to one of 3 groups; (1) functional hand splint only (n=15); (2) functional hand splint combined with task-specific training (n=15); (3) task-specific training only (n=15). Allocation concealment will be achieved using sequentially numbered, sealed opaque envelopes opened by an off-site officer after baseline measures. Treatment will be provided for a period of 2 weeks, with outcome measures taken at baseline, 1 hour after randomisation, 2 weeks and 10 weeks. The functional hand splint will be a wrist cock-up splint (+/- thumb support or supination strap). Task-specific training will involve 10 sessions of CO-OP provided in a group of 2-4 children. Primary outcome measures will be the Canadian Occupational Performance Measure (COPM) and the Goal Attainment Scale (GAS). Analysis will be conducted on an intention-to-treat basis. Discussion This paper outlines the protocol for two randomised controlled trials investigating functional hand splints and CO-OP for children with CP and BI. PMID:25023385
Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K
2018-01-01
Robotic spine brace based on parallel-actuated robotic system is a new device for treatment and sensing of scoliosis, however, the strong dynamic coupling and anisotropy problem of parallel manipulators result in accuracy loss of rehabilitation force control, including big error in direction and value of force. A novel active force control strategy named modal space force control is proposed to solve these problems. Considering the electrical driven system and contact environment, the mathematical model of spatial parallel manipulator is built. The strong dynamic coupling problem in force field is described via experiments as well as the anisotropy problem of work space of parallel manipulators. The effects of dynamic coupling on control design and performances are discussed, and the influences of anisotropy on accuracy are also addressed. With mass/inertia matrix and stiffness matrix of parallel manipulators, a modal matrix can be calculated by using eigenvalue decomposition. Making use of the orthogonality of modal matrix with mass matrix of parallel manipulators, the strong coupled dynamic equations expressed in work space or joint space of parallel manipulator may be transformed into decoupled equations formulated in modal space. According to this property, each force control channel is independent of others in the modal space, thus we proposed modal space force control concept which means the force controller is designed in modal space. A modal space active force control is designed and implemented with only a simple PID controller employed as exampled control method to show the differences, uniqueness, and benefits of modal space force control. Simulation and experimental results show that the proposed modal space force control concept can effectively overcome the effects of the strong dynamic coupling and anisotropy problem in the physical space, and modal space force control is thus a very useful control framework, which is better than the current joint space control and work space control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
Design of Object-Oriented Distributed Simulation Classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
Robin, Nicolas; Toussaint, Lucette; Coudevylle, Guillaume R; Ruart, Shelly; Hue, Olivier; Sinnapah, Stephane
2018-06-22
This study tested whether text messages prompting adults 50 years of age and older to perform mental imagery would increase aerobic physical activity (APA) duration using a randomized parallel trial design. Participants were assigned to an Imagery 1, Imagery 2, or placebo group. For 4 weeks, each group was exposed to two conditions (morning text message vs. no morning text message). In the morning message condition, the imagery groups received a text message with the instruction to mentally imagine performing an APA, and the placebo group received a placebo message. All participants received an evening text message of "Did you do your cardio today? If yes, what did you do?" for 3 days per week. Participants of the imagery groups reported significantly more weekly minutes of APA in the morning text message condition compared with the no morning message condition. Electronic messages were effective at increasing minutes of APA.
Xyce parallel electronic simulator design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting
2010-09-01
This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to insure a high level of code quality and robustness are essential. Version control, issue tracking customer support, C++ style guildlines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly beenmore » funded by ASC, the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathmaticians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.« less
A Generic Mesh Data Structure with Parallel Applications
ERIC Educational Resources Information Center
Cochran, William Kenneth, Jr.
2009-01-01
High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
An embedded multi-core parallel model for real-time stereo imaging
NASA Astrophysics Data System (ADS)
He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu
2018-04-01
The real-time processing based on embedded system will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensor. The task partitioning and scheduling strategies for embedded multiprocessor system starts relatively late, compared with that for PC computer. In this paper, aimed at embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computing amount, throughout capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, a parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and had a speed-up ratio up to 6.4.
Displacement and deformation measurement for large structures by camera network
NASA Astrophysics Data System (ADS)
Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu
2014-03-01
A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.
NASA Technical Reports Server (NTRS)
Halpert, G.; Webb, D. A.
1983-01-01
Three batteries were operated in parallel from a common bus during charge and discharge. SMM utilized NASA Standard 20AH cells and batteries, and LANDSAT-D NASA 50AH cells and batteries of a similar design. Each battery consisted of 22 series connected cells providing the nominal 28V bus. The three batteries were charged in parallel using the voltage limit/current taper mode wherein the voltage limit was temperature compensated. Discharge occurred on the demand of the spacecraft instruments and electronics. Both flights were planned for three to five year missions. The series/parallel configuration of cells and batteries for the 3-5 yr mission required a well controlled product with built-in reliability and uniformity. Examples of how component, cell and battery selection methods affect the uniformity of the series/parallel operation of the batteries both in testing and in flight are given.
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.
NASA Technical Reports Server (NTRS)
Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele
2004-01-01
In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms and discuss OpenMP implementation issues which effect the performance of multi-level parallel applications.
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. DANTE is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options:more » $$S{_}n$$ (discrete-ordinates), $$P{_}n$$ (spherical harmonics), and $$SP{_}n$$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. DANTE uses a standard $$S{_}n$$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.« less
Research on retailer data clustering algorithm based on Spark
NASA Astrophysics Data System (ADS)
Huang, Qiuman; Zhou, Feng
2017-03-01
Big data analysis is a hot topic in the IT field now. Spark is a high-reliability and high-performance distributed parallel computing framework for big data sets. K-means algorithm is one of the classical partition methods in clustering algorithm. In this paper, we study the k-means clustering algorithm on Spark. Firstly, the principle of the algorithm is analyzed, and then the clustering analysis is carried out on the supermarket customers through the experiment to find out the different shopping patterns. At the same time, this paper proposes the parallelization of k-means algorithm and the distributed computing framework of Spark, and gives the concrete design scheme and implementation scheme. This paper uses the two-year sales data of a supermarket to validate the proposed clustering algorithm and achieve the goal of subdividing customers, and then analyze the clustering results to help enterprises to take different marketing strategies for different customer groups to improve sales performance.
Embodied and Distributed Parallel DJing.
Cappelen, Birgitta; Andersson, Anders-Petter
2016-01-01
Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
Performance analysis of parallel branch and bound search with the hypercube architecture
NASA Technical Reports Server (NTRS)
Mraz, Richard T.
1987-01-01
With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speed up over a VAX 11/785, and the load balance of the problem when using loosely coupled multiprocessor system based on the hypercube architecture.
ParallABEL: an R library for generalized parallelization of genome-wide association studies.
Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S
2010-04-29
Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of SNPs/traits. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation. The input data of this group includes pairs of individuals. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC) includes 2,062 individuals with 545,080, SNPs' genotyping, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance, and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.
NASA Technical Reports Server (NTRS)
Wasson, J. T.; Kallemeyn, G. W.
2002-01-01
We present new data or iron meteorites that are members of group IAB or are closely related to this large group, and we have also reevaluated some of our earlier data for these irons. In the past it was not possible to distinguish IAB and IIICD irons on the basis of their positions on element-Ni diagrams. We now find that plotting, the new and revised data yields six sets of compact fields on element-Au diagrams, each set corresponding to a compositional group. The largest set includes the majority (approximately equal to 70) of irons previously designated IA: We christened this set the IAB main group. The remaining five sets we designate subgroups within the IAB complex. Three of these subgroups have Au contents similar to the main group, and form parallel trends in most element-Ni diagrams. The groups originally designated IIIC and IIID are two of these subgroups: they are now well resolved from each other and from the main group. The other low-Au subgroup has Ni contents just above the main group. Two other IAB subgroups have appreciably higher Au contents than the main group and show weaker compositional links to it. We have named these five subgroups on the basis of their Au and Ni contents. The three subgroups having Au contents similar to the main group are the low-Au (L) subgroups the two others the high-Au (H) subgroups. The Ni contents are designated high (H), medium (M), or low (L). Thus the old group IIID is now the sLH subgroup. the old group IIIC is the sLM subgroup. In addition, eight irons assigned to two grouplets plot between sLL and sLM on most element-Au diagrams. A large number (27) of related irons plot outside these compact fields but nonetheless appear to be sufficiently related to also be included in the IAB complex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites lead to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronizedmore » sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.« less
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.
Scalable descriptive and correlative statistics with Titan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David C.; Pebay, Philippe Pierre
This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by the means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; then, this theoretical property is verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit
NASA Technical Reports Server (NTRS)
Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete;
1998-01-01
Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.
A design concept of parallel elasticity extracted from biological muscles for engineered actuators.
Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie
2016-08-23
Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems a universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.
Lee, Sang Ki; Kim, Kap Jung; Park, Kyung Hoon; Choy, Won Sik
2014-10-01
With the continuing improvements in implants for distal humerus fractures, it is expected that newer types of plates, which are anatomically precontoured, thinner and less irritating to soft tissue, would have comparable outcomes when used in a clinical study. The purpose of this study was to compare the clinical and radiographic outcomes in patients with distal humerus fractures who were treated with orthogonal and parallel plating methods using precontoured distal humerus plates. Sixty-seven patients with a mean age of 55.4 years (range 22-90 years) were included in this prospective study. The subjects were randomly assigned to receive 1 of 2 treatments: orthogonal or parallel plating. The following results were assessed: operating time, time to fracture union, presence of a step or gap at the articular margin, varus-valgus angulation, functional recovery, and complications. No intergroup differences were observed based on radiological and clinical results between the groups. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes, mean operation time, union time, or complication rates. There were no cases of fracture nonunion in either group; heterotrophic ossification was found 3 patients in orthogonal plating group and 2 patients in parallel plating group. In our practice, no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes or complication rates. However, orthogonal plating method may be preferred in cases of coronal shear fractures, where posterior to anterior fixation may provide additional stability to the intraarticular fractures. Additionally, parallel plating method may be the preferred technique used for fractures that occur at the most distal end of the humerus.
Parallel Logic Programming and Parallel Systems Software and Hardware
1989-07-29
Conference, Dallas TX. January 1985. (55) [Rous75] Roussel, P., "PROLOG: Manuel de Reference et d’Uilisation", Group d’ Intelligence Artificielle , Universite d...completed. Tools were provided for software development using artificial intelligence techniques. Al software for massively parallel architectures was...using artificial intelligence tech- niques. Al software for massively parallel architectures was started. 1. Introduction We describe research conducted
NASA Technical Reports Server (NTRS)
Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry
1998-01-01
Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand written NAB Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools: an interactive computer aided parallelization too] that generates message passing code, 2) the Portland Group's HPF compiler and 3) using compiler directives with the native FORTAN77 compiler on the SGI Origin2000.
MARBLE: A system for executing expert systems in parallel
NASA Technical Reports Server (NTRS)
Myers, Leonard; Johnson, Coe; Johnson, Dean
1990-01-01
This paper details the MARBLE 2.0 system which provides a parallel environment for cooperating expert systems. The work has been done in conjunction with the development of an intelligent computer-aided design system, ICADS, by the CAD Research Unit of the Design Institute at California Polytechnic State University. MARBLE (Multiple Accessed Rete Blackboard Linked Experts) is a system of C Language Production Systems (CLIPS) expert system tool. A copied blackboard is used for communication between the shells to establish an architecture which supports cooperating expert systems that execute in parallel. The design of MARBLE is simple, but it provides support for a rich variety of configurations, while making it relatively easy to demonstrate the correctness of its parallel execution features. In its most elementary configuration, individual CLIPS expert systems execute on their own processors and communicate with each other through a modified blackboard. Control of the system as a whole, and specifically of writing to the blackboard is provided by one of the CLIPS expert systems, an expert control system.
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scal-ability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidel-ity of vehicular behavior for accurate capture of event phe-nomena. Parallel execution is warranted to sustain the re-quired detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to de-signing a simulator with memory andmore » speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation meth-ods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary per-formance results are presented on benchmark road net-works, showing scalability to one million vehicles simu-lated on one processor.« less
Design of the free-air ionization chamber, FAC-IR-150, for X-ray dosimetry
NASA Astrophysics Data System (ADS)
Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein
2018-03-01
The primary standard for X-ray dosimetry is based on the free-air ionization chamber (FAC). Therefore, the Atomic Energy Organization of Iran (AEOI) designed the free-air ionization chamber, FAC-IR-150, for low and medium energy X-ray dosimetry. The purpose of this work is the study of the free-air ionization chamber characteristics and the design of the FAC-IR-150. The FAC-IR-150 dosimeter has two parallel plates, a high voltage plate and a collector plate. A guard electrode surrounds the collector and is separated by an air gap. A group of guard strips is used between up and down electrodes to produce a uniform electric field in all the ion chamber volume. This design involves introducing the correction factors and determining the exact dimensions of the ionization chamber by using Monte Carlo simulation.
An overview of confounding. Part 1: the concept and how to address it.
Howards, Penelope P
2018-04-01
Confounding is an important source of bias, but it is often misunderstood. We consider how confounding occurs and how to address confounding using examples. Study results are confounded when the effect of the exposure on the outcome, mixes with the effects of other risk and protective factors for the outcome. This problem arises when these factors are present to different degrees among the exposed and unexposed study participants, but not all differences between the groups result in confounding. Thinking about an ideal study where all of the population of interest is exposed in one universe and is unexposed in a parallel universe helps to distinguish confounders from other differences. In an actual study, an observed unexposed population is chosen to stand in for the unobserved parallel universe. Differences between this substitute population and the parallel universe result in confounding. Confounding by identified factors can be addressed analytically and through study design, but only randomization has the potential to address confounding by unmeasured factors. Nevertheless, a given randomized study may still be confounded. Confounded study results can lead to incorrect conclusions about the effect of the exposure of interest on the outcome. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.
Fine-grained parallel RNAalifold algorithm for RNA secondary structure prediction on FPGA
Xia, Fei; Dou, Yong; Zhou, Xingming; Yang, Xuejun; Xu, Jiaqing; Zhang, Yang
2009-01-01
Background In the field of RNA secondary structure prediction, the RNAalifold algorithm is one of the most popular methods using free energy minimization. However, general-purpose computers including parallel computers or multi-core computers exhibit parallel efficiency of no more than 50%. Field Programmable Gate-Array (FPGA) chips provide a new approach to accelerate RNAalifold by exploiting fine-grained custom design. Results RNAalifold shows complicated data dependences, in which the dependence distance is variable, and the dependence direction is also across two dimensions. We propose a systolic array structure including one master Processing Element (PE) and multiple slave PEs for fine grain hardware implementation on FPGA. We exploit data reuse schemes to reduce the need to load energy matrices from external memory. We also propose several methods to reduce energy table parameter size by 80%. Conclusion To our knowledge, our implementation with 16 PEs is the only FPGA accelerator implementing the complete RNAalifold algorithm. The experimental results show a factor of 12.2 speedup over the RNAalifold (ViennaPackage – 1.6.5) software for a group of aligned RNA sequences with 2981-residue running on a Personal Computer (PC) platform with Pentium 4 2.6 GHz CPU. PMID:19208138
Multiprocessor graphics computation and display using transputers
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
A package of two-dimensional graphics routines was developed to run on a transputer-based parallel processing system. These routines were designed to enable applications programmers to easily generate and display results from the transputer network in a graphic format. The graphics procedures were designed for the lowest possible network communication overhead for increased performance. The routines were designed for ease of use and to present an intuitive approach to generating graphics on the transputer parallel processing system.
Neural network architecture for form and motion perception (Abstract Only)
NASA Astrophysics Data System (ADS)
Grossberg, Stephen
1991-08-01
Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 yields V2 and V1 yields MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long- range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed- up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte''s Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recent proposed solution of the global aperture problem including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90 degree(s), whereas opposite directions differ by 180 degree(s), and why a cortical stream V1 yields V2 yields MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.
[Effect of urapidil combined with phentolamine on hypertension during extracorporeal circulation].
Wang, Fangjun; Chen, Bin; Liu, Yang; Tu, Faping
2014-08-01
To study the effect of urapidil combined with phentolamine in the management of hypertension during extracorporeal circulation. Ninety patients undergoing aortic and mitral valve replacement were randomly divided into 3 equal groups to receive treatment with phentolamine (group A), urapidil (group B), or both (group C) during extracorporeal circulation. The mean arterial pressure (MAP) before and after drug administration, time interval of two administrations, spontaneous recovery of heart beat after aorta unclamping, ventricular arrhythmia, changes of ST-segment 1 min after the recovery of heart beat, ante-parallel cycle time, aorta clamping time, post-parallel cycle time, dopamine dose after cardiac resuscitation, and perioperative changes of plasma TNF-α and IL-6 levels were recorded. There was no significant difference in MAP between the 3 groups before or after hypotensive drug administration (P>0.05). The time interval of two hypotensive drug administrations was longer in group C than in groups A and B (P<0.05). The incidence of spontaneous recovery of heart beat after aorta unclamping, incidence of ventricular arrhythmia, changes of ST-segment 1 min after the recovery of heart beat, ante-parallel cycle time, aorta clamping time, and post-parallel cycle time were all comparable between the 3 groups. The dose of dopamine administered after cardiac resuscitation was significantly larger in group B than in groups A or group C (P<0.05). The plasma levels of TNF-α and IL-6 were significantly increased after CPB and after the operation in all the groups, but were lowed in group C than in groups A and B at the end of CPB and at 2 h and 12 after the operation. Urapidil combined with phentolamine can control hypertension during extracorporeal circulation without causing hypotension.
Transmission Index Research of Parallel Manipulators Based on Matrix Orthogonal Degree
NASA Astrophysics Data System (ADS)
Shao, Zhu-Feng; Mo, Jiao; Tang, Xiao-Qiang; Wang, Li-Ping
2017-11-01
Performance index is the standard of performance evaluation, and is the foundation of both performance analysis and optimal design for the parallel manipulator. Seeking the suitable kinematic indices is always an important and challenging issue for the parallel manipulator. So far, there are extensive studies in this field, but few existing indices can meet all the requirements, such as simple, intuitive, and universal. To solve this problem, the matrix orthogonal degree is adopted, and generalized transmission indices that can evaluate motion/force transmissibility of fully parallel manipulators are proposed. Transmission performance analysis of typical branches, end effectors, and parallel manipulators is given to illustrate proposed indices and analysis methodology. Simulation and analysis results reveal that proposed transmission indices possess significant advantages, such as normalized finite (ranging from 0 to 1), dimensionally homogeneous, frame-free, intuitive and easy to calculate. Besides, proposed indices well indicate the good transmission region and relativity to the singularity with better resolution than the traditional local conditioning index, and provide a novel tool for kinematic analysis and optimal design of fully parallel manipulators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram
Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a functionmore » of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.« less
Work stealing for GPU-accelerated parallel programs in a global address space framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram
Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a functionmore » of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain« less
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
On the impact of communication complexity on the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D. B.; Van Rosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical alorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
Design of a MIMD neural network processor
NASA Astrophysics Data System (ADS)
Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.
1994-03-01
The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second with an eight processor array capable of over one billion connections per second.
Methods for design and evaluation of parallel computating systems (The PISCES project)
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.; Wise, Robert; Haught, Mary JO
1989-01-01
The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.
Parallel transmission RF pulse design with strict temperature constraints.
Deniz, Cem M; Carluccio, Giuseppe; Collins, Christopher
2017-05-01
RF safety in parallel transmission (pTx) is generally ensured by imposing specific absorption rate (SAR) limits during pTx RF pulse design. There is increasing interest in using temperature to ensure safety in MRI. In this work, we present a local temperature correlation matrix formalism and apply it to impose strict constraints on maximum absolute temperature in pTx RF pulse design for head and hip regions. Electromagnetic field simulations were performed on the head and hip of virtual body models. Temperature correlation matrices were calculated for four different exposure durations ranging between 6 and 24 min using simulated fields and body-specific constants. Parallel transmission RF pulses were designed using either SAR or temperature constraints, and compared with each other and unconstrained RF pulse design in terms of excitation fidelity and safety. The use of temperature correlation matrices resulted in better excitation fidelity compared with the use of SAR in parallel transmission RF pulse design (for the 6 min exposure period, 8.8% versus 21.0% for the head and 28.0% versus 32.2% for the hip region). As RF exposure duration increases (from 6 min to 24 min), the benefit of using temperature correlation matrices on RF pulse design diminishes. However, the safety of the subject is always guaranteed (the maximum temperature was equal to 39°C). This trend was observed in both head and hip regions, where the perfusion rates are very different. Copyright © 2017 John Wiley & Sons, Ltd.
Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen
2017-01-01
Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have been proven to be not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of a drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs and a standard two parallel arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase2/Phase3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and eNPV to compare simulated programs. For the development strategies, we considered that the eNPV showed robust improvement for each successive strategy, with the highest being for a three-arm response adaptive randomization design in Phase 2 and a group sequential design with 5 analyses in Phase 3.
Biomechanical Comparison of Parallel and Crossed Suture Repair for Longitudinal Meniscus Tears.
Milchteim, Charles; Branch, Eric A; Maughon, Ty; Hughey, Jay; Anz, Adam W
2016-04-01
Longitudinal meniscus tears are commonly encountered in clinical practice. Meniscus repair devices have been previously tested and presented; however, prior studies have not evaluated repair construct designs head to head. This study compared a new-generation meniscus repair device, SpeedCinch, with a similar established device, Fast-Fix 360, and a parallel repair construct to a crossed construct. Both devices utilize self-adjusting No. 2-0 ultra-high molecular weight polyethylene (UHMWPE) and 2 polyether ether ketone (PEEK) anchors. Crossed suture repair constructs have higher failure loads and stiffness compared with simple parallel constructs. The newer repair device would exhibit similar performance to an established device. Controlled laboratory study. Sutures were placed in an open fashion into the body and posterior horn regions of the medial and lateral menisci in 16 cadaveric knees. Evaluation of 2 repair devices and 2 repair constructs created 4 groups: 2 parallel vertical sutures created with the Fast-Fix 360 (2PFF), 2 crossed vertical sutures created with the Fast-Fix 360 (2XFF), 2 parallel vertical sutures created with the SpeedCinch (2PSC), and 2 crossed vertical sutures created with the SpeedCinch (2XSC). After open placement of the repair construct, each meniscus was explanted and tested to failure on a uniaxial material testing machine. All data were checked for normality of distribution, and 1-way analysis of variance by ranks was chosen to evaluate for statistical significance of maximum failure load and stiffness between groups. Statistical significance was defined as P < .05. The mean maximum failure loads ± 95% CI (range) were 89.6 ± 16.3 N (125.7-47.8 N) (2PFF), 72.1 ± 11.7 N (103.4-47.6 N) (2XFF), 71.9 ± 15.5 N (109.4-41.3 N) (2PSC), and 79.5 ± 25.4 N (119.1-30.9 N) (2XSC). Interconstruct comparison revealed no statistical difference between all 4 constructs regarding maximum failure loads (P = .49). Stiffness values were also similar, with no statistical difference on comparison (P = .28). Both devices in the current study had similar failure load and stiffness when 2 vertical or 2 crossed sutures were tested in cadaveric human menisci. Simple parallel vertical sutures perform similarly to crossed suture patterns at the time of implantation.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
Li, B B; Lin, F; Cai, L H; Chen, Y; Lin, Z J
2017-08-01
Objective: To evaluate the effects of parallel versus perpendicular double plating for distal humerus fracture of type C. Methods: A standardized comprehensive literature search was performed by PubMed, Embase, Cochrane library, CMB, CNKI and Medline datebase.Randomized controlled studies on comparison between parallel versus perpendicular double plating for distal humerus fracture of type C before December 2015 were enrolled in the study.All date were analyzed by the RevMan 5.2 software. Results: Six studies, including 284 patients, met the inclusion criteria.There were 155 patients in perpendicular double plating group, 129 patients in parallel double plating group.The results of Meta-analysis indicated that there were statistically significant difference between the two groups in complications ( OR =2.59, 95% CI : 1.03 to 6.53, P =0.04). There was no significant difference between the two groups in surgical duration ( MD =-1.84, 95% CI : -9.06 to 5.39, P =0.62), bone union time ( MD =0.09, 95% CI : -0.06 to 0.24, P =0.22), Mayo Elbow Performance Score ( MD =0.09, 95% CI : -0.06 to 0.24, P =0.22), Range of Motions ( MD =-0.92, 95% CI : -4.65 to 2.81, P =0.63) and the rate of excellent and good results ( OR =0.64, 95% CI : 0.27 to 1.52, P =0.31). Conclusion: Both perpendicular and parallel double plating are effective in distal humerus fracture of type C, parallel double plating has less complications.
Integrating end-to-end threads of control into object-oriented analysis and design
NASA Technical Reports Server (NTRS)
Mccandlish, Janet E.; Macdonald, James R.; Graves, Sara J.
1993-01-01
Current object-oriented analysis and design methodologies fall short in their use of mechanisms for identifying threads of control for the system being developed. The scenarios which typically describe a system are more global than looking at the individual objects and representing their behavior. Unlike conventional methodologies that use data flow and process-dependency diagrams, object-oriented methodologies do not provide a model for representing these global threads end-to-end. Tracing through threads of control is key to ensuring that a system is complete and timing constraints are addressed. The existence of multiple threads of control in a system necessitates a partitioning of the system into processes. This paper describes the application and representation of end-to-end threads of control to the object-oriented analysis and design process using object-oriented constructs. The issue of representation is viewed as a grouping problem, that is, how to group classes/objects at a higher level of abstraction so that the system may be viewed as a whole with both classes/objects and their associated dynamic behavior. Existing object-oriented development methodology techniques are extended by adding design-level constructs termed logical composite classes and process composite classes. Logical composite classes are design-level classes which group classes/objects both logically and by thread of control information. Process composite classes further refine the logical composite class groupings by using process partitioning criteria to produce optimum concurrent execution results. The goal of these design-level constructs is to ultimately provide the basis for a mechanism that can support the creation of process composite classes in an automated way. Using an automated mechanism makes it easier to partition a system into concurrently executing elements that can be run in parallel on multiple processors.
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. A Parallel Virtual Machine (PVM) is used as message passage language on a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is perfectly handled by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatile nature, that permits the usage of the computational resources available that are experiencing less use at a given point in time.
Parallel-Vector Algorithm For Rapid Structural Anlysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
2012-09-30
platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4...software (Detection cLassificaiton for MAchine learning - High Peformance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture2, allowing existing algorithms to distribute to the various processing nodes with minimal changes
Design of miniature type parallel coupled microstrip hairpin filter in UHF range
NASA Astrophysics Data System (ADS)
Hasan, Adib Belhaj; Rahman, Maj Tarikur; Kahhar, Azizul; Trina, Tasnim; Saha, Pran Kanai
2017-12-01
A microstrip parallel coupled line bandpass filter is designed in UHF range and the filter size is reduced by microstrip hairpin structure. The FR4 substrate is used as base material of the filter. The filter is analyzed by both ADS and CST design studio in the frequency range of 500 MHz to 650 MHz. The Bandwidth is found 13.27% with a center frequency 570 MHz. Simulation from both ADS and CST shows a very good agreement of performance of the filter.
Xyce Parallel Electronic Simulator Users' Guide Version 6.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel com- puting platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one tomore » develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright c 2002-2017 Sandia Corporation. All rights reserved. Trademarks Xyce TM Electronic Simulator and Xyce TM are trademarks of Sandia Corporation. Orcad, Orcad Capture, PSpice and Probe are registered trademarks of Cadence Design Systems, Inc. Microsoft, Windows and Windows 7 are registered trademarks of Microsoft Corporation. Medici, DaVinci and Taurus are registered trademarks of Synopsys Corporation. Amtec and TecPlot are trademarks of Amtec Engineering, Inc. All other trademarks are property of their respective owners. Contacts World Wide Web http://xyce.sandia.gov https://info.sandia.gov/xyce (Sandia only) Email xyce@sandia.gov (outside Sandia) xyce-sandia@sandia.gov (Sandia only) Bug Reports (Sandia only) http://joseki-vm.sandia.gov/bugzilla http://morannon.sandia.gov/bugzilla« less
Xyce Parallel Electronic Simulator Users' Guide Version 6.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been de- signed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel com- puting platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows onemore » to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase$-$ a message passing parallel implementation $-$ which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
NASA Technical Reports Server (NTRS)
Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HALF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR) Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/ programming model (HPF and MPI (message passing interface)) combinations will be compared, based on latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition we would also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz) NEC SX-4/32, SGI/CRAY T3E, SGI Origin2000.
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the feature of this single input multiple output linear system. According to the difficulty of controller design under multi-constraints in multirate switching model, this model can be converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure this closed-loop fuzzy control system to be globally exponentially stable. The solution of the multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show, compared with solving switching controller, multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than arbitrary switching controller and single-rate parallel distributed compensation under the same conditions.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages just use texts to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who enjoy using the schematic approach. The proposed visual hardware behavioral language has the ability to graphically express design information using visual parallel models (blocks), visual sequential models (processes) and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for the hardware designers. That would make the hardware designers capture the design ideas easily and explicitly using this visual hardware behavioral language.
ParallABEL: an R library for generalized parallelization of genome-wide association studies
2010-01-01
Background Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Results Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of SNPs/traits. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation. The input data of this group includes pairs of individuals. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC) includes 2,062 individuals with 545,080, SNPs' genotyping, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Conclusions Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance, and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL. PMID:20429914
Accelerating large-scale protein structure alignments with graphics processing units
2012-01-01
Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
Design and analysis of all-dielectric broadband nonpolarizing parallel-plate beam splitters.
Wang, Wenliang; Xiong, Shengming; Zhang, Yundong
2007-06-01
Past research on the all-dielectric nonpolarizing beam splitter is reviewed. With the aid of the needle thin-film synthesis method and the conjugate graduate refine method, three different split ratio nonpolarizing parallel-plate beam splitters over a 200 nm spectral range centered at 550 nm with incidence angles of 45 degrees are designed. The chosen materials component and the initial stack are based on the Costich and Thelen theories. The results of design and analysis show that the designs maintain a very low polarization ratio in the working range of the spectrum and has a reasonable angular field.
Evaluation of fault-tolerant parallel-processor architectures over long space missions
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1989-01-01
The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with .99 probability and that the probability of system failure during one-half hour of full operation be less than 10(-7). The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP) is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009 the Spider system at the Oak Ridge National Laboratory s Leadership Computing Facility (OLCF) was the world s largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF s diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smallerscale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider s originalmore » design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth from over 240GB=sec today to 1TB=sec and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.« less
Signore, Antonio; Benedicenti, Stefano; Kaitsas, Vassilios; Barone, Michele; Angiero, Francesca; Ravera, Giambattista
2009-02-01
This retrospective study investigated the clinical effectiveness over up to 8 years of parallel-sided and of tapered glass-fiber posts, in combination with either hybrid composite or dual-cure composite resin core material, in endodontically treated, maxillary anterior teeth covered with full-ceramic crowns. The study population comprised 192 patients and 526 endodontically treated teeth, with various degrees of hard-tissue loss, restored by the post-and-core technique. Four groups were defined based on post shape and core build-up materials, and within each group post-and-core restorations were assigned randomly with respect to root morphology. Inclusion criteria were symptom-free endodontic therapy, root-canal treatment with a minimum apical seal of 4mm, application of rubber dam, need for post-and-core complex because of coronal tooth loss, and tooth with at least one residual coronal wall. Survival rate of the post-and-core restorations was determined using Kaplan-Meier statistical analysis. The restorations were examined clinically and radiologically; mean observation period was 5.3 years. The overall survival rate of glass-fiber post-and-core restorations was 98.5%. The survival rate for parallel-sided posts was 98.6% and for tapered posts was 96.8%. Survival rates for core build-up materials were 100% for dual-cure composite and 96.8% for hybrid light-cure composite. For both glass-fiber post designs and for both core build-up materials, clinical performance was satisfactory. Survival was higher for teeth retaining four and three coronal walls.
Kantsyrev, V L; Chuvatin, A S; Rudakov, L I; Velikovich, A L; Shrestha, I K; Esaulov, A A; Safronova, A S; Shlyaptseva, V V; Osborne, G C; Astanovitsky, A L; Weller, M E; Stafford, A; Schultz, K A; Cooper, M C; Cuneo, M E; Jones, B; Vesey, R A
2014-12-01
A compact Z-pinch x-ray hohlraum design with parallel-driven x-ray sources is experimentally demonstrated in a configuration with a central target and tailored shine shields at a 1.7-MA Zebra generator. Driving in parallel two magnetically decoupled compact double-planar-wire Z pinches has demonstrated the generation of synchronized x-ray bursts that correlated well in time with x-ray emission from a central reemission target. Good agreement between simulated and measured hohlraum radiation temperature of the central target is shown. The advantages of compact hohlraum design applications for multi-MA facilities are discussed.
A parallel form of the Gudjonsson Suggestibility Scale.
Gudjonsson, G H
1987-09-01
The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.
A Randomized Trial of a Web-based Intervention to Improve Migraine Self-Management and Coping
Bromberg, Jonas; Wood, Mollie E.; Black, Ryan A.; Surette, Daniel A.; Zacharoff, Kevin L.; Chiauzzi, Emil J.
2011-01-01
Objective Test the clinical efficacy of a web-based intervention designed to increase patient self-efficacy to perform headache self-management activities and symptom management strategies; and reduce migraine-related psychological distress. Background In spite of their demonstrated efficacy, behavioral interventions are used infrequently as an adjunct in medical treatment of migraine. Little clinical attention is paid to the behavioral factors that can help manage migraine more effectively, improve the quality of care, and improve quality of life. Access to evidenced-based, tailored, behavioral treatment is limited for many people with migraine. Design The study is a parallel group design with two conditions, (1) an experimental group exposed to the web intervention, and (2) a no-treatment control group that was not exposed to the intervention. Assessments for both groups were conducted at baseline (T1), 1-month (T2), 3-months (T3), and 6-months (T4). Results Compared to controls, participants in the experimental group reported significantly: increased headache self-efficacy, increased use of relaxation, increased use of social support, decreased pain catastrophizing, decreased depression, and decreased stress. The hypothesis that the intervention would reduce pain could not be tested. Conclusions Demonstrated increases in self-efficacy to perform headache self-management, increased use of positive symptom management strategies, and reported decreased migraine-related depression and stress, suggest that the intervention may be a useful behavioral adjunct to a comprehensive medical approach to managing migraine. PMID:22413151
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit a near optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate paralleled algorithm is insensitive to the number of processors as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmical changes.
Structure-based Design and In-Parallel Synthesis of Inhibitors of AmpC b-lactamase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tondi, D.; Powers, R.A.; Negri, M.C.
2010-03-08
Group I {beta}-lactamases are a major cause of antibiotic resistance to {beta}-lactams such as penicillins and cephalosporins. These enzymes are only modestly affected by classic {beta}-lactam-based inhibitors, such as clavulanic acid. Conversely, small arylboronic acids inhibit these enzymes at sub-micromolar concentrations. Structural studies suggest these inhibitors bind to a well-defined cleft in the group I {beta}-lactamase AmpC; this cleft binds the ubiquitous R1 side chain of {beta}-lactams. Intriguingly, much of this cleft is left unoccupied by the small arylboronic acids. To investigate if larger boronic acids might take advantage of this cleft, structure-guided in-parallel synthesis was used to explore newmore » inhibitors of AmpC. Twenty-eight derivatives of the lead compound, 3-aminophenylboronic acid, led to an inhibitor with 80-fold better binding (2; K{sub i} 83 nM). Molecular docking suggested orientations for this compound in the R1 cleft. Based on the docking results, 12 derivatives of 2 were synthesized, leading to inhibitors with K{sub i} values of 60 nM and with improved solubility. Several of these inhibitors reversed the resistance of nosocomial Gram-positive bacteria, though they showed little activity against Gram-negative bacteria. The X-ray crystal structure of compound 2 in complex with AmpC was subsequently determined to 2.1 {angstrom} resolution. The placement of the proximal two-thirds of the inhibitor in the experimental structure corresponds with the docked structure, but a bond rotation leads to a distinctly different placement of the distal part of the inhibitor. In the experimental structure, the inhibitor interacts with conserved residues in the R1 cleft whose role in recognition has not been previously explored. Combining structure-based design with in-parallel synthesis allowed for the rapid exploration of inhibitor functionality in the R1 cleft of AmpC. The resulting inhibitors differ considerably from {beta}-lactams but nevertheless inhibit the enzyme well. The crystal structure of 2 (K{sub i} 83 nM) in complex with AmpC may guide exploration of a highly conserved, largely unexplored cleft, providing a template for further design against AmpC {beta}-lactamase.« less
Design and implementation of highly parallel pipelined VLSI systems
NASA Astrophysics Data System (ADS)
Delange, Alphonsus Anthonius Jozef
A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower level components etc. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high level design language, an object oriented development kit for tool design, a design data management system, and design and analysis tools such as a high level simulator and graphics design interface which are integrated into the prototype system and graphics interface are described. Procedures for the synthesis of semiregular processor arrays, and to compute the switching of input/output signals, memory management and control of processor array, and sequencing and segmentation of input/output data streams due to partitioning and clustering of the processor array during the subsequent synthesis steps, are described. The architecture and control of a parallel system is designed and each component mapped to a module or module generator in a symbolic layout library, compacted for design rules of VLSI (Very Large Scale Integration) technology. An example of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains is given.
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
TECA: A Parallel Toolkit for Extreme Climate Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, Mr; Ruebel, Oliver; Byna, Surendra
2012-03-12
We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.
Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation
NASA Technical Reports Server (NTRS)
Bischof, c. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.
1994-01-01
Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more notorious at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16-cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99 % efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32 , 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
2012-01-01
Background Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single subject and case series studies of speech treatments are available. There are currently no randomised control trials or other well designed group trials available to guide clinical practice. Methods/Design A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: 1) Rapid Syllable Transition Treatment and the 2) Nuffield Dyspraxia Programme – Third edition. Eligible children will be English speaking, aged 4–12 years with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292 item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: 1) treatment effects at 1 week post will be similar for both treatments, 2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and 3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment. This protocol was approved by the Human Research Ethics Committee, University of Sydney (#12924). Discussion This will be the first randomised control trial to test treatment for CAS. It will be valuable for clinical decision-making and providing evidence-based services for children with CAS. Trial Registration Australian New Zealand Clinical Trials Registry: ACTRN12612000744853 PMID:22863021
NASA Astrophysics Data System (ADS)
Trakumas, S.; Salter, E.
2009-02-01
Adverse health effects due to exposure to airborne particles are associated with particle deposition within the human respiratory tract. Particle size, shape, chemical composition, and the individual physiological characteristics of each person determine to what depth inhaled particles may penetrate and deposit within the respiratory tract. Various particle inertial classification devices are available to fractionate airborne particles according to their aerodynamic size to approximate particle penetration through the human respiratory tract. Cyclones are most often used to sample thoracic or respirable fractions of inhaled particles. Extensive studies of different cyclonic samplers have shown, however, that the sampling characteristics of cyclones do not follow the entire selected convention accurately. In the search for a more accurate way to assess worker exposure to different fractions of inhaled dust, a novel sampler comprising several inertial impactors arranged in parallel was designed and tested. The new design includes a number of separated impactors arranged in parallel. Prototypes of respirable and thoracic samplers each comprising four impactors arranged in parallel were manufactured and tested. Results indicated that the prototype samplers followed closely the penetration characteristics for which they were designed. The new samplers were found to perform similarly for liquid and solid test particles; penetration characteristics remained unchanged even after prolonged exposure to coal mine dust at high concentration. The new parallel impactor design can be applied to approximate any monotonically decreasing penetration curve at a selected flow rate. Personal-size samplers that operate at a few L/min as well as area samplers that operate at higher flow rates can be made based on the suggested design. Performance of such samplers can be predicted with high accuracy employing well-established impaction theory.
Electromagnetic Design of a Magnetically-Coupled Spatial Power Combiner
NASA Technical Reports Server (NTRS)
Bulcha, B.; Cataldo, G.; Stevenson, T. R.; U-Yen, K.; Moseley, S. H.; Wollack, E. J.
2017-01-01
The design of a two-dimensional beam-combining network employing a parallel-plate superconducting waveguide with a mono-crystalline silicon dielectric is presented. This novel beam-combining network structure employs an array of magnetically coupled antenna elements to achieve high coupling efficiency and full sampling of the intensity distribution while avoiding diffractive losses in the multi-mode region defined by the parallel-plate waveguide. These attributes enable the structures use in realizing compact far-infrared spectrometers for astrophysical and instrumentation applications. When configured with a suitable corporate-feed power-combiner, this fully sampled array can be used to realize a low-sidelobe apodized response without incurring a reduction in coupling efficiency. To control undesired reflections over a wide range of angles in the finite-sized parallel-plate waveguide region, a wideband meta-material electromagnetic absorber structure is implemented. This adiabatic structure absorbs greater than 99 of the power over the 1.7:1 operational band at angles ranging from normal (0 degree) to near parallel (180 degree) incidence. Design, simulations, and application of the device will be presented.
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we make a study on the inversion of probability tomography (IPT) with gravity gradiometry data at first. The space resolution of the results is improved by multi-tensor joint inversion, depth weighting matrix and the other methods. Aiming at solving the problems brought by the big data in the exploration, we present the parallel algorithm and the performance analysis combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) accelerating. In the test of the synthetic model and real data from Vinton Dome, we get the improved results. It is also proved that the improved inversion algorithm is effective and feasible. The performance of parallel algorithm we designed is better than the other ones with CUDA. The maximum speedup could be more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger scale of data and the new analysis method is practical.
Predicting the stability of a compressible periodic parallel jet flow
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H.
1996-01-01
It is known that mixing enhancement in compressible free shear layer flows with high convective Mach numbers is difficult. One design strategy to get around this is to use multiple nozzles. Extrapolating this design concept in a one dimensional manner, one arrives at an array of parallel rectangular nozzles where the smaller dimension is omega and the longer dimension, b, is taken to be infinite. In this paper, the feasibility of predicting the stability of this type of compressible periodic parallel jet flow is discussed. The problem is treated using Floquet-Bloch theory. Numerical solutions to this eigenvalue problem are presented. For the case presented, the interjet spacing, s, was selected so that s/omega =2.23. Typical plots of the eigenvalue and stability curves are presented. Results obtained for a range of convective Mach numbers from 3 to 5 show growth rates omega(sub i)=kc(sub i)/2 range from 0.25 to 0.29. These results indicate that coherent two-dimensional structures can occur without difficulty in multiple parallel periodic jet nozzles and that shear layer mixing should occur with this type of nozzle design.
Mathematical Abstraction: Constructing Concept of Parallel Coordinates
NASA Astrophysics Data System (ADS)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2017-09-01
Mathematical abstraction is an important process in teaching and learning mathematics so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). Based on this framework, abstraction process comprises of observable epistemic actions, Recognition, Building-With, Construction, and Consolidation called as RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated concept of Parallel Coordinates in a group discussion. It uses AiC framework for analyzing mathematical abstraction of a group of pre-service teachers consisted of four students in learning Parallel Coordinates concepts. The data were collected through video recording, students’ worksheet, test, and field notes. The result shows that the students’ prior knowledge related to concept of the Cartesian coordinate has significant role in the process of constructing Parallel Coordinates concept as a new knowledge. The consolidation process is influenced by the social interaction between group members. The abstraction process taken place in this group were dominated by empirical abstraction that emphasizes on the aspect of identifying characteristic of manipulated or imagined object during the process of recognizing and building-with.
Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++
NASA Technical Reports Server (NTRS)
Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis
1994-01-01
Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
Scan line graphics generation on the massively parallel processor
NASA Technical Reports Server (NTRS)
Dorband, John E.
1988-01-01
Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems.
Stone, John E; Gohara, David; Shi, Guochun
2010-05-01
We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures.
da Costa Poubel, Luiz Augusto; de Gouvea, Cresus Vinicius Deppes; Calazans, Fernanda Signorelli; Dip, Etyene Castro; Alves, Wesley Veltri; Marins, Stella Soares; Barcelos, Roberta; Barceleiro, Marcos Oliveira
2018-04-25
This study evaluated the effect of the administration of pre-operative dexamethasone on tooth sensitivity stemming from in-office bleaching. A triple-blind, parallel-design, randomized clinical trial was conducted on 70 volunteers who received dexamethasone or placebo capsules. The drugs were administered in a protocol of three daily 8-mg doses of the drug, starting 48 h before the in-office bleaching treatment. Two bleaching sessions with 37.5% hydrogen peroxide gel were performed with a 1-week interval. Tooth sensitivity (TS) was recorded on visual analog scales (VAS) and numeric rating scales (NRS) in different periods up to 48 h after bleaching. The color evaluations were also performed. The absolute risk of TS and its intensity were evaluated by using Fisher's exact test. Comparisons of the TS intensity (NRS and VAS data) were performed by using the Mann-Whitney U test and a two-way repeated measures ANOVA and Tukey's test, respectively. In both groups, a high risk of TS (Dexa 80% x Placebo 94%) was detected. No significant difference was observed in terms of TS intensity. A whitening of approximately 3 shade guide units of the VITA Classical was detected in both groups, which were statistically similar. It was concluded that the administration pre-operatively of dexamethasone, in the proposed protocol, does not reduce the incidence or intensity of bleaching-induced tooth sensitivity. The use of dexamethasone drug before in-office bleaching treatment does not reduce incidence or intensity of tooth sensitivity. NCT02956070.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing system is often based on a specific application, but this idea is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on multi-core DSP platform is designed, and it is mainly suitable for the complex algorithms which are composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and summarizes the advantages of the mainstream model of multi-core DSP (the Master-Slave model and the Data Flow model), so that it has better performance. This paper uses three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing with the effectiveness of the Master-Slave and the Data Flow model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, J Jean; Tran-Dube,; #769
2011-08-03
Because of the critical roles of aberrant signaling in cancer, both c-MET and ALK receptor tyrosine kinases are attractive oncology targets for therapeutic intervention. The cocrystal structure of 3 (PHA-665752), bound to c-MET kinase domain, revealed a novel ATP site environment, which served as the target to guide parallel, multiattribute drug design. A novel 2-amino-5-aryl-3-benzyloxypyridine series was created to more effectively make the key interactions achieved with 3. In the novel series, the 2-aminopyridine core allowed a 3-benzyloxy group to reach into the same pocket as the 2,6-dichlorophenyl group of 3 via a more direct vector and thus with amore » better ligand efficiency (LE). Further optimization of the lead series generated the clinical candidate crizotinib (PF-02341066), which demonstrated potent in vitro and in vivo c-MET kinase and ALK inhibition, effective tumor growth inhibition, and good pharmaceutical properties.« less
Zhong, Lei; Wang, Dengqiang; Gan, Xiaoni; Yang, Tong; He, Shunping
2011-01-01
Group B of the Sox transcription factor family is crucial in embryo development in the insects and vertebrates. Sox group B, unlike the other Sox groups, has an unusually enlarged functional repertoire in insects, but the timing and mechanism of the expansion of this group were unclear. We collected and analyzed data for Sox group B from 36 species of 12 phyla representing the major metazoan clades, with an emphasis on arthropods, to reconstruct the evolutionary history of SoxB in bilaterians and to date the expansion of Sox group B in insects. We found that the genome of the bilaterian last common ancestor probably contained one SoxB1 and one SoxB2 gene only and that tandem duplications of SoxB2 occurred before the arthropod diversification but after the arthropod-nematode divergence, resulting in the basal repertoire of Sox group B in diverse arthropod lineages. The arthropod Sox group B repertoire expanded differently from the vertebrate repertoire, which resulted from genome duplications. The parallel increases in the Sox group B repertoires of the arthropods and vertebrates are consistent with the parallel increases in the complexity and diversification of these two important organismal groups. PMID:21305035
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods (13, 12, 44, 38). The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method (19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9) was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations (39, 25). In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [241. In this paper, our concem will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
Establishing a group of endpoints in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong
2016-02-02
A parallel computer executes a number of tasks, each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
Matsuoka, Ryosuke; Usuda, Mika; Masuda, Yasunobu; Kunou, Masaaki; Utsunomiya, Kazunori
2017-05-30
Lactic-fermented egg white (LE), produced by lactic acid fermentation of egg white, is an easy-to-consume form of egg white. Here we assessed the effect of daily consumption of LE for 8 weeks on serum total cholesterol (TC) levels. The study followed a double-blind, parallel-arm design and included 88 adult men with mild hypercholesterolemia (mean ± standard error) serum TC levels, 229 ± 1.6 mg/dL; range, 204-259 mg/dL). The subjects were randomly divided into three groups, which consumed LE containing 4, 6, or 8 g of protein daily for 8 weeks. Blood samples were collected before starting LE consumption (baseline) and at 4 and 8 weeks to measure serum TC and low-density lipoprotein cholesterol (LDL-C) levels. After 8 weeks of consumption, serum TC levels in the 8 g group decreased by 11.0 ± 3.7 mg/dL, a significant decrease compared to baseline (p < 0.05) and a significantly greater decrease than for the 4 g group (3.1 ± 3.4 mg/dL; p < 0.05). Serum LDL-C levels in the 8 g group decreased by 13.7 ± 3.1 mg/dL, again a significant decrease compared with baseline (p < 0.05) and a significantly greater decrease than that for the 4 g group (2.1 ± 2.9 mg/dL; p < 0.05). Consumption of LE for 8 weeks at a daily dose of 8 g of proteins reduced serum TC and LDL-C levels in men with mild hypercholesterolemia, suggesting this may be effective in helping to prevent arteriosclerotic diseases. This clinical trial was retrospectively registered with the Japan Medical Association Center for Clinical Trials, (JMA-IIA00279; registered on 13/03/2017; https://dbcentre3.jmacct.med.or.jp/JMACTR/App/JMACTRE02_04/JMACTRE02_04.aspx?kbn=3&seqno=6530 ).
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
Joint Experiment on Scalable Parallel Processors (JESPP) Parallel Data Management
2006-05-01
management and analysis tool, called Simulation Data Grid ( SDG ). The design principles driving the design of SDG are: 1) minimize network communication...or SDG . In this report, an initial prototype implementation of this system is described. This project follows on earlier research, primarily...distributed logging system had some 2 limitations. These limitations will be described in this report, and how the SDG addresses these limitations. 3.0
Calvillo-Arbizu, Jorge; Roa-Romero, Laura M; Milán-Martín, José A; Aresté-Fosalba, Nuria; Tornero-Molina, Fernando; Macía-Heras, Manuel; Vega-Díaz, Nicanor
2014-01-01
A major obstacle that hinders the implementation of technological solutions in healthcare is the rejection of developed systems by users (healthcare professionals and patients), who consider that they do not adapt to their real needs. (1) To design technological architecture for the telecare of nephrological patients by applying a methodology that prioritises the involvement of users (professionals and patients) throughout the design and development process; (2) to show how users' needs can be determined and addressed by means of technology, increasing the acceptance level of the final systems. In order to determine the main current needs in Nephrology, a group of Spanish Nephrology Services was involved. Needs were recorded through semi-structured interviews with the medical team and questionnaires for professionals and patients. A set of requirements were garnered from professionals and patients. In parallel, the group of biomedical engineers identified requirements for patient telecare from a technological perspective. All of these requirements drove the design of modular architecture for the telecare of peritoneal dialysis and pre-dialysis patients. This work shows how it is possible to involve users in the whole process of design and development of a system. The result of this work is the design of adaptable modular architecture for the telecare of nephrological patients and it addresses the preferences and needs of patient and professional users consulted.
NASA Technical Reports Server (NTRS)
Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce
1996-01-01
A simple, application-oriented, transfer function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics into current-controlled current sources. Five distinct loop gains to assess system stability and performance are identified and their physical significance is described. A design methodology for the current share compensator is presented. The effect of this current sharing scheme on 'system output impedance' is analyzed.
Dose finding with the sequential parallel comparison design.
Wang, Jessie J; Ivanova, Anastasia
2014-01-01
The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
Design of a 6-DOF upper limb rehabilitation exoskeleton with parallel actuated joints.
Chen, Yanyan; Li, Ge; Zhu, Yanhe; Zhao, Jie; Cai, Hegao
2014-01-01
In this paper, a 6-DOF wearable upper limb exoskeleton with parallel actuated joints which perfectly mimics human motions is proposed. The upper limb exoskeleton assists the movement of physically weak people. Compared with the existing upper limb exoskeletons which are mostly designed using a serial structure with large movement space but small stiffness and poor wearable ability, a prototype for motion assistance based on human anatomy structure has been developed in our design. Moreover, the design adopts balls instead of bearings to save space, which simplifies the structure and reduces the cost of the mechanism. The proposed design also employs deceleration processes to ensure that the transmission ratio of each joint is coincident.
Analysis and Design of ITER 1 MV Core Snubber
NASA Astrophysics Data System (ADS)
Wang, Haitian; Li, Ge
2012-11-01
The core snubber, as a passive protection device, can suppress arc current and absorb stored energy in stray capacitance during the electrical breakdown in accelerating electrodes of ITER NBI. In order to design the core snubber of ITER, the control parameters of the arc peak current have been firstly analyzed by the Fink-Baker-Owren (FBO) method, which are used for designing the DIIID 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be a simplified representation as an equivalent parallel resistance and inductance, which has been neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulation and experiments result in dramatically large arc shorting currents due to the parallel inductance effect. The case shows that the core snubber utilizing the FBO method gives more compact design.
A parallel-pipelined architecture for a multi carrier demodulator
NASA Astrophysics Data System (ADS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-03-01
Analog devices have been used for processing the information on board the satellites. Presently, digital devices are being used because they are economical and flexible as compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of law data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high speed components. The design is optimized for reduced power and area requirements. This is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms that are used are verified by the use of a computer aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design is simulated in C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms mainly from a speed view point. A hypercude based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-01-01
Analog devices have been used for processing the information on board the satellites. Presently, digital devices are being used because they are economical and flexible as compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of law data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high speed components. The design is optimized for reduced power and area requirements. This is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms that are used are verified by the use of a computer aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design is simulated in C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms mainly from a speed view point. A hypercude based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems
Stone, John E.; Gohara, David; Shi, Guochun
2010-01-01
We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures. PMID:21037981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an applicationmore » s memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application s overall memory foot-print (application data- structures, libraries, etc.).« less
Accessing and visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, G. Bruce; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids.
NASA Technical Reports Server (NTRS)
Beck, D.; Das, B.; Dickson, C.; Douglas, B.; Long, L.; Middour, K.; Reid, S.; Uber, J.; Walsh, G.; Wang, L.
1989-01-01
This summary presents the main conclusions and results of the design studies conducted by a group of 13 students at the University of Maryland. The students, all participants in the spring 1989 course ENEE418 in the Electrical Engineering Department, met weekly in a two-hour class to discuss and evaluate design alternatives. The main problem considered was the design and control of a planar testbed simulating a free-flying space robot for applications in satellite servicing. This project grew out of the 1988 class where a dual-armed free flyer (DAFF) was designed and partially built. This year, a group of six students continued the development of the DAFF, achieving computer-controlled motion of the DAFF's arms. All fabrication and testing of the DAFF is being conducted in the Intelligent Servosystems Laboratory at the University of Maryland. While the work related to the design and development of the DAFF is the main subject of the report, it should be noted that other students in the ENEE418 class have investigated additional issues related to manipulation in space. For example, one group studied a new parallel linkage based manipulator for fine motion applications such as in assembly operations in space. They investigated the mechanism's kinematics, its reachable workspace, and precision of applying forces and torques. In yet another project, a student set out to measure and map the friction characteristics of the actuators used in the Modular Dextrous Hand, which has been recently developed in the Intelligent Servosystems Laboratory. The results are expected to help compensate for this friction, which is a highly nonlinear disturbance and presents significant problems in high-precision, low-speed operations. This summary continues with the discussion of the results obtained by the group of students who have been working on developing the DAFF testbed.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
Student leadership in small group science inquiry
NASA Astrophysics Data System (ADS)
Oliveira, Alandeom W.; Boz, Umit; Broadwell, George A.; Sadler, Troy D.
2014-09-01
Background: Science educators have sought to structure collaborative inquiry learning through the assignment of static group roles. This structural approach to student grouping oversimplifies the complexities of peer collaboration and overlooks the highly dynamic nature of group activity. Purpose: This study addresses this issue of oversimplification of group dynamics by examining the social leadership structures that emerge in small student groups during science inquiry. Sample: Two small student groups investigating the burning of a candle under a jar participated in this study. Design and method: We used a mixed-method research approach that combined computational discourse analysis (computational quantification of social aspects of small group discussions) with microethnography (qualitative, in-depth examination of group discussions). Results: While in one group social leadership was decentralized (i.e., students shared control over topics and tasks), the second group was dominated by a male student (centralized social leadership). Further, decentralized social leadership was found to be paralleled by higher levels of student cognitive engagement. Conclusions: It is argued that computational discourse analysis can provide science educators with a powerful means of developing pedagogical models of collaborative science learning that take into account the emergent nature of group structures and highly fluid nature of student collaboration.
Matching pursuit parallel decomposition of seismic data
NASA Astrophysics Data System (ADS)
Li, Chuanhui; Zhang, Fanchang
2017-07-01
In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. We pick a fixed number of envelope peaks from the current signal in every iteration according to the number of compute nodes and assign them to the compute nodes on average to search the optimal Morlet wavelets in parallel. With the help of parallel computer systems and Message Passing Interface, the parallel algorithm gives full play to the advantages of parallel computing to significantly improve the computation speed of the matching pursuit decomposition and also has good expandability. Besides, searching only one optimal Morlet wavelet by every compute node in every iteration is the most efficient implementation.
Research on Parallel Three Phase PWM Converters base on RTDS
NASA Astrophysics Data System (ADS)
Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun
2018-01-01
Converters parallel operation can increase capacity of the system, but it may lead to potential zero-sequence circulating current, so the control of circulating current was an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the converters parallel system in real time and study the circulating current restraining. The equivalent model of two parallel converters and zero-sequence circulating current(ZSCC) were established and analyzed, then a strategy using variable zero vector control was proposed to suppress the circulating current. For two parallel modular converters, hardware-in-the-loop(HIL) study based on RTDS and practical experiment were implemented, results prove that the proposed control strategy is feasible and effective.
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator ismore » to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code, The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.« less
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based onmore » hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops {center_dot} day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microseconds).« less
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
The Tera Multithreaded Architecture and Unstructured Meshes
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Mavriplis, Dimitri J.
1998-01-01
The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.
Linden, Ariel; Yarnold, Paul R
2016-12-01
Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is being studied, the outcome variable is serially ordered as a time series and the intervention is expected to 'interrupt' the level and/or trend of the time series, subsequent to its introduction. Given that the internal validity of the design rests on the premise that the interruption in the time series is associated with the introduction of the treatment, treatment effects may seem less plausible if a parallel trend already exists in the time series prior to the actual intervention. Thus, sensitivity analyses should focus on detecting structural breaks in the time series before the intervention. In this paper, we introduce a machine-learning algorithm called optimal discriminant analysis (ODA) as an approach to determine if structural breaks can be identified in years prior to the initiation of the intervention, using data from California's 1988 voter-initiated Proposition 99 to reduce smoking rates. The ODA analysis indicates that numerous structural breaks occurred prior to the actual initiation of Proposition 99 in 1989, including perfect structural breaks in 1983 and 1985, thereby casting doubt on the validity of treatment effects estimated for the actual intervention when using a single-group ITSA design. Given the widespread use of ITSA for evaluating observational data and the increasing use of machine-learning techniques in traditional research, we recommend that structural break sensitivity analysis is routinely incorporated in all research using the single-group ITSA design. © 2016 John Wiley & Sons, Ltd.
Doronina-Amitonova, L. V.; Fedotov, I. V.; Ivashkina, O. I.; Zots, M. A.; Fedotov, A. B.; Anokhin, K. V.; Zheltikov, A. M.
2013-01-01
Seeing the big picture of functional responses within large neural networks in a freely functioning brain is crucial for understanding the cellular mechanisms behind the higher nervous activity, including the most complex brain functions, such as cognition and memory. As a breakthrough toward meeting this challenge, implantable fiber-optic interfaces integrating advanced optogenetic technologies and cutting-edge fiber-optic solutions have been demonstrated, enabling a long-term optogenetic manipulation of neural circuits in freely moving mice. Here, we show that a specifically designed implantable fiber-optic interface provides a powerful tool for parallel long-term optical interrogation of distinctly separate, functionally different sites in the brain of freely moving mice. This interface allows the same groups of neurons lying deeply in the brain of a freely behaving mouse to be reproducibly accessed and optically interrogated over many weeks, providing a long-term dynamic detection of genome activity in response to a broad variety of pharmacological and physiological stimuli. PMID:24253232
NASA Astrophysics Data System (ADS)
Doronina-Amitonova, L. V.; Fedotov, I. V.; Ivashkina, O. I.; Zots, M. A.; Fedotov, A. B.; Anokhin, K. V.; Zheltikov, A. M.
2013-11-01
Seeing the big picture of functional responses within large neural networks in a freely functioning brain is crucial for understanding the cellular mechanisms behind the higher nervous activity, including the most complex brain functions, such as cognition and memory. As a breakthrough toward meeting this challenge, implantable fiber-optic interfaces integrating advanced optogenetic technologies and cutting-edge fiber-optic solutions have been demonstrated, enabling a long-term optogenetic manipulation of neural circuits in freely moving mice. Here, we show that a specifically designed implantable fiber-optic interface provides a powerful tool for parallel long-term optical interrogation of distinctly separate, functionally different sites in the brain of freely moving mice. This interface allows the same groups of neurons lying deeply in the brain of a freely behaving mouse to be reproducibly accessed and optically interrogated over many weeks, providing a long-term dynamic detection of genome activity in response to a broad variety of pharmacological and physiological stimuli.
Photonics for aerospace sensors
NASA Astrophysics Data System (ADS)
Pellegrino, John; Adler, Eric D.; Filipov, Andree N.; Harrison, Lorna J.; van der Gracht, Joseph; Smith, Dale J.; Tayag, Tristan J.; Viveiros, Edward A.
1992-11-01
The maturation in the state-of-the-art of optical components is enabling increased applications for the technology. Most notable is the ever-expanding market for fiber optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, have suggested that components and processors may be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Such advantages as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.
Experimental characterization of a binary actuated parallel manipulator
NASA Astrophysics Data System (ADS)
Giuseppe, Carbone
2016-05-01
This paper describes the BAPAMAN (Binary Actuated Parallel MANipulator) series of parallel manipulators that has been conceived at Laboratory of Robotics and Mechatronics (LARM). Basic common characteristics of BAPAMAN series are described. In particular, it is outlined the use of a reduced number of active degrees of freedom, the use of design solutions with flexural joints and Shape Memory Alloy (SMA) actuators for achieving miniaturization, cost reduction and easy operation features. Given the peculiarities of BAPAMAN architecture, specific experimental tests have been proposed and carried out with the aim to validate the proposed design and to evaluate the practical operation performance and the characteristics of a built prototype, in particular, in terms of operation and workspace characteristics.
File-access characteristics of parallel scientific workloads
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David; Purakayastha, Apratim; Best, Michael; Ellis, Carla Schlatter
1995-01-01
Phenomenal improvements in the computational performance of multiprocessors have not been matched by comparable gains in I/O system performance. This imbalance has resulted in I/O becoming a significant bottleneck for many scientific applications. One key to overcoming this bottleneck is improving the performance of parallel file systems. The design of a high-performance parallel file system requires a comprehensive understanding of the expected workload. Unfortunately, until recently, no general workload studies of parallel file systems have been conducted. The goal of the CHARISMA project was to remedy this problem by characterizing the behavior of several production workloads, on different machines, at the level of individual reads and writes. The first set of results from the CHARISMA project describe the workloads observed on an Intel iPSC/860 and a Thinking Machines CM-5. This paper is intended to compare and contrast these two workloads for an understanding of their essential similarities and differences, isolating common trends and platform-dependent variances. Using this comparison, we are able to gain more insight into the general principles that should guide parallel file-system design.
A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks
Geng, Tao; Gan, John Q.; Dyson, Matthew; Tsui, Chun SL; Sepulveda, Francisco
2008-01-01
A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a “parallel BCI.” Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data was recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms. PMID:18584040
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
The BLAZE language: A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, P.; Vanrosendale, J.
1985-01-01
A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with onceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described and shows how this language would be used in typical scientific programming.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
Photonic content-addressable memory system that uses a parallel-readout optical disk
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.
1995-11-01
We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 \\times 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
RAMA: A file system for massively parallel computers
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1993-01-01
This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated in lo the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
100 Gbps Wireless System and Circuit Design Using Parallel Spread-Spectrum Sequencing
NASA Astrophysics Data System (ADS)
Scheytt, J. Christoph; Javed, Abdul Rehman; Bammidi, Eswara Rao; KrishneGowda, Karthik; Kallfass, Ingmar; Kraemer, Rolf
2017-09-01
In this article mixed analog/digital signal processing techniques based on parallel spread-spectrum sequencing (PSSS) and radio frequency (RF) carrier synchronization for ultra-broadband wireless communication are investigated on system and circuit level.
RTNN: The New Parallel Machine in Zaragoza
NASA Astrophysics Data System (ADS)
Sijs, A. J. V. D.
I report on the development of RTNN, a parallel computer designed as a 4^4 hypercube of 256 T9000 transputer nodes, each with 8 MB memory. The peak performance of the machine is expected to be 2.5 Gflops.
Suni, Jaana H; Rinne, Marjo; Tokola, Kari; Mänttäri, Ari; Vasankari, Tommi
2017-01-01
Neck and low back pain (LBP) are common in office workers. Exercise trials to reduce neck and LBP conducted in sport sector are lacking. We investigated the effectiveness of the standardised Fustra20Neck&Back exercise program for reducing pain and increasing fitness in office workers with recurrent non-specific neck and/or LBP. Volunteers were recruited through newspaper and Facebook. The design is a multi-centre randomised, two-arm, parallel group trial across 34 fitness clubs in Finland. Eligibility was determined by structured telephone interview. Instructors were specially educated professionals. Neuromuscular exercise was individually guided twice weekly for 10 weeks. Webropol survey, and objective measurements of fitness, physical activity, and sedentary behavior were conducted at baseline, and at 3 and 12 months. Mean differences between study groups (Exercise vs Control) were analysed using a general linear mixed model according to the intention-to-treat principle. At least moderate intensity pain (≥40 mm) in both the neck and back was detected in 44% of participants at baseline. Exercise compliance was excellent: 92% participated 15-20 times out of 20 possible. Intensity and frequency of neck pain, and strain in neck/shoulders decreased significantly in the Exercise group compared with the Control group. No differences in LBP and strain were detected. Neck/shoulder and trunk flexibility improved, as did quality of life in terms of pain and physical functioning. The Fustra20Neck&Back exercise program was effective for reducing neck/shoulder pain and strain, but not LBP. Evidence-based exercise programs of sports clubs have potential to prevent persistent, disabling musculoskeletal problems.
Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis
NASA Technical Reports Server (NTRS)
Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee
2016-01-01
The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.
Multivariable speed synchronisation for a parallel hybrid electric vehicle drivetrain
NASA Astrophysics Data System (ADS)
Alt, B.; Antritter, F.; Svaricek, F.; Schultalbers, M.
2013-03-01
In this article, a new drivetrain configuration of a parallel hybrid electric vehicle is considered and a novel model-based control design strategy is given. In particular, the control design covers the speed synchronisation task during a restart of the internal combustion engine. The proposed multivariable synchronisation strategy is based on feedforward and decoupled feedback controllers. The performance and the robustness properties of the closed-loop system are illustrated by nonlinear simulation results.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradient (HOG) is an efficient feature extraction scheme, and HOG descriptors are feature descriptors which is widely used in computer vision and image processing for the purpose of biometrics, target tracking, automatic target detection(ATD) and automatic target recognition(ATR) etc. However, computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, the optimal design method and theory frame for real-time HOG feature extraction based on FPGA were proposed. The main principle is as follows: firstly, the parallel gradient computing unit circuit based on parallel pipeline structure was designed. Secondly, the calculation of arctangent and square root operation was simplified. Finally, a histogram generator based on parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in a pixel period by these computing units.
High-Fidelity Simulation for Advanced Cardiac Life Support Training
Davis, Lindsay E.; Storjohann, Tara D.; Spiegel, Jacqueline J.; Beiber, Kellie M.
2013-01-01
Objective. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. Design. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). Assessment. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students’ knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. Conclusions. College curricula should incorporate simulation to complement but not replace lecture for ACLS education. PMID:23610477
High-fidelity simulation for advanced cardiac life support training.
Davis, Lindsay E; Storjohann, Tara D; Spiegel, Jacqueline J; Beiber, Kellie M; Barletta, Jeffrey F
2013-04-12
OBJECTIVE. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. DESIGN. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). ASSESSMENT. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students' knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. CONCLUSIONS. College curricula should incorporate simulation to complement but not replace lecture for ACLS education.
Study on the effects of microencapsulated Lactobacillus delbrueckii on the mouse intestinal flora.
Sun, Qingshen; Shi, Yue; Wang, Fuying; Han, Dequan; Lei, Hong; Zhao, Yao; Sun, Quan
2015-01-01
To evaluate the protective effects of microencapsulation on Lactobacillus delbrueckii by random, parallel experimental design. Lincomycin hydrochloride-induced intestinal malfunction mouse model was successfully established; then the L. delbrueckii microcapsule was given to the mouse. The clinical behaviour, number of intestinal flora, mucous IgA content in small intestine, IgG and IL-2 level in peripheral blood were monitored. The histological sections were also prepared. The L. delbrueckii microcapsule could have more probiotic effects as indicated by higher bifidobacterium number in cecal contents. The sIgA content in microcapsule treated group was significantly higher than that in non-encapsulated L. delbrueckii treated group (p < 0.05). Intestine pathological damage of the L. delbrueckii microcapsule-treated group showed obvious restoration. The L. delbrueckii microcapsules could relieve the intestinal tissue pathological damage and play an important role in curing antibiotic-induced intestinal flora dysfunction.
Computational design of d-peptide inhibitors of hepatitis delta antigen dimerization
NASA Astrophysics Data System (ADS)
Elkin, Carl D.; Zuccola, Harmon J.; Hogle, James M.; Joseph-McCarthy, Diane
2000-11-01
Hepatitis delta virus (HDV) encodes a single polypeptide called hepatitis delta antigen (DAg). Dimerization of DAg is required for viral replication. The structure of the dimerization region, residues 12 to 60, consists of an anti-parallel coiled coil [Zuccola et al., Structure, 6 (1998) 821]. Multiple Copy Simultaneous Searches (MCSS) of the hydrophobic core region formed by the bend in the helix of one monomer of this structure were carried out for many diverse functional groups. Six critical interaction sites were identified. The Protein Data Bank was searched for backbone templates to use in the subsequent design process by matching to these sites. A 14 residue helix expected to bind to the d-isomer of the target structure was selected as the template. Over 200 000 mutant sequences of this peptide were generated based on the MCSS results. A secondary structure prediction algorithm was used to screen all sequences, and in general only those that were predicted to be highly helical were retained. Approximately 100 of these 14-mers were model built as d-peptides and docked with the l-isomer of the target monomer. Based on calculated interaction energies, predicted helicity, and intrahelical salt bridge patterns, a small number of peptides were selected as the most promising candidates. The ligand design approach presented here is the computational analogue of mirror image phage display. The results have been used to characterize the interactions responsible for formation of this model anti-parallel coiled coil and to suggest potential ligands to disrupt it.
NASA Technical Reports Server (NTRS)
Braun, R. D.; Kroo, I. M.
1995-01-01
Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically de- composed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include reducing the amount of information transferred between disciplines, the removal of large iteration-loops, allowing the use of different subspace optimizers among the various analysis groups, an analysis framework which is easily parallelized and can operate on heterogenous equipment, and a structural framework that is well-suited for conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.
Tsutsui, Hiroyuki; Momomura, Shinichi; Saito, Yoshihiko; Ito, Hiroshi; Yamamoto, Kazuhiro; Ohishi, Tomomi; Okino, Naoko; Guo, Weinong
2017-09-01
The prognosis of heart failure patients with reduced ejection fraction (HFrEF) in Japan remains poor, although there is growing evidence for increasing use of evidence-based pharmacotherapies in Japanese real-world HF registries. Sacubitril/valsartan (LCZ696) is a first-in-class angiotensin receptor neprilysin inhibitor shown to reduce mortality and morbidity in the recently completed largest outcome trial in patients with HFrEF (PARADIGM-HF trial). The prospectively designed phase III PARALLEL-HF (Prospective comparison of ARNI with ACE inhibitor to determine the noveL beneficiaL trEatment vaLue in Japanese Heart Failure patients) study aims to assess the clinical efficacy and safety of LCZ696 in Japanese HFrEF patients, and show similar improvements in clinical outcomes as the PARADIGM-HF study enabling the registration of LCZ696 in Japan. This is a multicenter, randomized, double-blind, parallel-group, active controlled study of 220 Japanese HFrEF patients. Eligibility criteria include a diagnosis of chronic HF (New York Heart Association Class II-IV) and reduced ejection fraction (left ventricular ejection fraction ≤35%) and increased plasma concentrations of natriuretic peptides [N-terminal pro B-type natriuretic peptide (NT-proBNP) ≥600pg/mL, or NT-proBNP ≥400pg/mL for those who had a hospitalization for HF within the last 12 months] at the screening visit. The study consists of three phases: (i) screening, (ii) single-blind active LCZ696 run-in, and (iii) double-blind randomized treatment. Patients tolerating LCZ696 50mg bid during the treatment run-in are randomized (1:1) to receive LCZ696 100mg bid or enalapril 5mg bid for 4 weeks followed by up-titration to target doses of LCZ696 200mg bid or enalapril 10mg bid in a double-blind manner. The primary outcome is the composite of cardiovascular death or HF hospitalization and the study is an event-driven trial. The design of the PARALLEL-HF study is aligned with the PARADIGM-HF study and aims to assess the efficacy and safety of LCZ696 in Japanese HFrEF patients. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
Alfawal, Alaa M H; Hajeer, Mohammad Y; Ajaj, Mowaffak A; Hamadah, Omar; Brad, Bassel
2018-02-17
To evaluate the effectiveness of two minimally invasive surgical procedures in the acceleration of canine retraction: piezocision and laser-assisted flapless corticotomy (LAFC). Trial design: A single-centre randomized controlled trial with a compound design (two-arm parallel-group design and a split-mouth design for each arm). 36 Class II division I patients (12 males, 24 females; age range: 15 to 27 years) requiring first upper premolars extraction followed by canine retraction. piezocision group (PG; n = 18) and laser-assisted flapless corticotomy group (LG; n = 18). A split-mouth design was applied for each group where the flapless surgical intervention was randomly allocated to one side and the other side served as a control side. the rate of canine retraction (primary outcome), anchorage loss and canine rotation, which were assessed at 1, 2, 3 and 4 months following the onset of canine retraction. Also the duration of canine retraction was recorded. Random sequence: Computer-generated random numbers. Allocation concealment: sequentially numbered, opaque, sealed envelopes. Blinding: Single blinded (outcomes' assessor). Seventeen patients in each group were enrolled in the statistical analysis. The rate of canine retraction was significantly greater in the experimental side than in the control side in both groups by two-fold in the first month and 1.5-fold in the second month (p < 0.001). Also the overall canine retraction duration was significantly reduced in the experimental side as compared with control side in both groups about 25% (p ≤ 0.001). There were no significant differences between the experimental and the control sides regarding loss of anchorage and upper canine rotation in both groups (p > 0.05). There were no significant differences between the two flapless techniques regarding the studied variables during all evaluation times (p > 0.05). Piezocision and laser-assisted flapless corticotomy appeared to be effective treatment methods for accelerating canine retraction without any significant untoward effect on anchorage or canine rotation during rapid retraction. ClinicalTrials.gov (Identifier: NCT02606331 ).
Menéndez-Nieto, Isabel; Cervera-Ballester, Juan; Maestre-Ferrín, Laura; Blaya-Tárraga, Juan Antonio; Peñarrocha-Oltra, David; Peñarrocha-Diago, Miguel
2016-11-01
Adequate bleeding control is essential for the success of periapical surgery. The aim of this study was to evaluate the effects of 2 hemostatic agents on the outcome of periapical surgery and their relationship with patient and teeth parameters. A prospective study was designed with 2 randomized parallel groups, depending on the hemostatic agent used: gauze impregnated in epinephrine (epinephrine group) and aluminum chloride (aluminum chloride group). The analysis of the hemorrhage control was judged before and after the application of the hemostatic agents by the surgeon, and 2 examiners independently recorded it as adequate (complete hemorrhage control) or inadequate (incomplete hemorrhage control). Ninety-nine patients with a periradicular lesion were enrolled in this study and divided into 2 groups: gauze impregnated in epinephrine in 48 patients (epinephrine group) or aluminum chloride in 51 (aluminum chloride group). In epinephrine group adequate hemostasis was achieved in 25 cases, and in aluminum chloride group it was achieved in 37 cases (P < .05). The outcome was better in the aluminum chloride group than in the gauze impregnated in epinephrine group. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
14 CFR 25.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2010 CFR
2010-01-01
... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
14 CFR 25.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2012 CFR
2012-01-01
... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
14 CFR 25.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2011 CFR
2011-01-01
... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
14 CFR 25.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2014 CFR
2014-01-01
... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
14 CFR 25.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2013 CFR
2013-01-01
... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
Development of structural schemes of parallel structure manipulators using screw calculus
NASA Astrophysics Data System (ADS)
Rashoyan, G. V.; Shalyukhin, K. A.; Gaponenko, EV
2018-03-01
The paper considers the approach to the structural analysis and synthesis of parallel structure robots based on the mathematical apparatus of groups of screws and on a concept of reciprocity of screws. The results are depicted of synthesis of parallel structure robots with different numbers of degrees of freedom, corresponding to the different groups of screws. Power screws are applied with this aim, based on the principle of static-kinematic analogy; the power screws are similar to the orts of axes of not driven kinematic pairs of a corresponding connecting chain. Accordingly, kinematic screws of the outlet chain of a robot are simultaneously determined which are reciprocal to power screws of kinematic sub-chains. Solution of certain synthesis problems is illustrated with practical applications. Closed groups of screws can have eight types. The three-membered groups of screws are of greatest significance, as well as four-membered screw groups [1] and six-membered screw groups. Three-membered screw groups correspond to progressively guiding mechanisms, to spherical mechanisms, and to planar mechanisms. The four-membered group corresponds to the motion of the SCARA robot. The six-membered group includes all possible motions. From the works of A.P. Kotelnikov, F.M. Dimentberg, it is known that closed fifth-order screw groups do not exist. The article presents examples of the mechanisms corresponding to the given groups.
A modular case-mix classification system for medical rehabilitation illustrated.
Stineman, M G; Granger, C V
1997-01-01
The authors present a modular set of patient classification systems designed for medical rehabilitation that predict resource use and outcomes for clinically similar groups of individuals. The systems, based on the Functional Independence Measure, are referred to as Function-Related Groups (FIM-FRGs). Using data from 23,637 lower extremity fracture patients from 458 inpatient medical rehabilitation facilities, 1995 benchmarks are provided and illustrated for length of stay, functional outcome, and discharge to home and skilled nursing facilities (SNFs). The FIM-FRG modules may be used in parallel to study interactions between resource use and quality and could ultimately yield an integrated strategy for payment and outcomes measurement. This could position the rehabilitation community to take a pioneering role in the application of outcomes-based clinical indicators.
AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System
NASA Astrophysics Data System (ADS)
Wang, R.; Harris, C.; Wicenec, A.
2016-07-01
In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. Having this in mind, we looked into various storage backend techniques that can possibly enable parallel I/O for CTDS by implementing new storage managers. After carrying on benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS Storage Manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.
Broadcasting collective operation contributions throughout a parallel computer
Faraj, Ahmad [Rochester, MN
2012-02-21
Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
Parallel design patterns for a low-power, software-defined compressed video encoder
NASA Astrophysics Data System (ADS)
Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar
2011-06-01
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allow the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memorynetwork processor device.
Kramer, A; Roth, B; Müller, G; Rudolph, P; Klöcker, N
2004-01-01
The main target of the combination of octenidine with phenoxyethanol (Octenisept) is the antisepsis of acute wounds, whereas polyhexanide combined with polyethylene glycol in Ringer solution (Lavasept) is the agent of choice for antisepsis of chronic wounds and burns. Because comparative data for both agents on the effects on wound healing are lacking, we investigated the influence of preparations based on polyhexanide and octenidine versus placebo (Ringer solution) in experimental superficial aseptic skin wounds (n = 108) of 20 mm diameter, using a double-blind, randomised, stratified, controlled, parallel-group design in piglets. Computerised planimetry and histopathological methods were used for the assessment of wound healing. Histologically, no significant differences could be verified at any time between the 3 groups. However, in the early phase (day 9 after wounding), the octenidine-based product retarded wound contraction to a significantly greater extent than placebo and polyhexanide, whereas in the later phase (days 18 and 28), polyhexanide promoted contraction significantly more than did placebo and octenidine. The consequence is complete wound closure after 22.9 days using polyhexanide, in comparison to the placebo after 24.1 days (p < 0.05) and octenidine after 28.3 days (no statistical difference to placebo). This may be explained by the better tolerance of polyhexanide in vitro, which was demonstrated with dose and time dependence in cytotoxicity tests on human amnion cells. Copyright 2004 S. Karger AG, Basel
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkard to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming, and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. Tensor product array computations are focused on along with a simple but important class of numerical algorithms. The problem of programming 1-D kernal routines is focused on first, such as parallel tridiagonal solvers, and then how such parallel kernels can be combined to form parallel tensor product algorithms is examined.
An object-oriented approach to nested data parallelism
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.; Chatterjee, Siddhartha
1994-01-01
This paper describes an implementation technique for integrating nested data parallelism into an object-oriented language. Data-parallel programming employs sets of data called 'collections' and expresses parallelism as operations performed over the elements of a collection. When the elements of a collection are also collections, then there is the possibility for 'nested data parallelism.' Few current programming languages support nested data parallelism however. In an object-oriented framework, a collection is a single object. Its type defines the parallel operations that may be applied to it. Our goal is to design and build an object-oriented data-parallel programming environment supporting nested data parallelism. Our initial approach is built upon three fundamental additions to C++. We add new parallel base types by implementing them as classes, and add a new parallel collection type called a 'vector' that is implemented as a template. Only one new language feature is introduced: the 'foreach' construct, which is the basis for exploiting elementwise parallelism over collections. The strength of the method lies in the compilation strategy, which translates nested data-parallel C++ into ordinary C++. Extracting the potential parallelism in nested 'foreach' constructs is called 'flattening' nested parallelism. We show how to flatten 'foreach' constructs using a simple program transformation. Our prototype system produces vector code which has been successfully run on workstations, a CM-2, and a CM-5.
A fast pulse design for parallel excitation with gridding conjugate gradient.
Feng, Shuo; Ji, Jim
2013-01-01
Parallel excitation (pTx) is recognized as a crucial technique in high field MRI to address the transmit field inhomogeneity problem. However, it can be time consuming to design pTx pulses which is not desirable. In this work, we propose a pulse design with gridding conjugate gradient (CG) based on the small-tip-angle approximation. The two major time consuming matrix-vector multiplications are substituted by two operators which involves with FFT and gridding only. Simulation results have shown that the proposed method is 3 times faster than conventional method and the memory cost is reduced by 1000 times.
NASA Technical Reports Server (NTRS)
Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce
1996-01-01
A simple, application-oriented, transfer function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics into current-controlled current sources. Five distinct loop gains to assess system stability and performance are identified and their physical significance is described. A design methodology for the current share compensator is presented. The effect of this current sharing scheme on 'system output impedance' is analyzed.
Bringas, Maria L.; Zaldivar, Marilyn; Rojas, Pedro A.; Martinez-Montes, Karelia; Chongo, Dora M.; Ortega, Maria A.; Galvizu, Reynaldo; Perez, Alba E.; Morales, Lilia M.; Maragoto, Carlos; Vera, Hector; Galan, Lidice; Besson, Mireille; Valdes-Sosa, Pedro A.
2015-01-01
This study was a two-armed parallel group design aimed at testing real world effectiveness of a music therapy (MT) intervention for children with severe neurological disorders. The control group received only the standard neurorestoration program and the experimental group received an additional MT “Auditory Attention plus Communication protocol” just before the usual occupational and speech therapy. Multivariate Item Response Theory (MIRT) identified a neuropsychological status-latent variable manifested in all children and which exhibited highly significant changes only in the experimental group. Changes in brain plasticity also occurred in the experimental group, as evidenced using a Mismatch Event Related paradigm which revealed significant post intervention positive responses in the latency range between 308 and 400 ms in frontal regions. LORETA EEG source analysis identified prefrontal and midcingulate regions as differentially activated by the MT in the experimental group. Taken together, our results showing improved attention and communication as well as changes in brain plasticity in children with severe neurological impairments, confirm the importance of MT for the rehabilitation of patients across a wide range of dysfunctions. PMID:26582974
Bringas, Maria L; Zaldivar, Marilyn; Rojas, Pedro A; Martinez-Montes, Karelia; Chongo, Dora M; Ortega, Maria A; Galvizu, Reynaldo; Perez, Alba E; Morales, Lilia M; Maragoto, Carlos; Vera, Hector; Galan, Lidice; Besson, Mireille; Valdes-Sosa, Pedro A
2015-01-01
This study was a two-armed parallel group design aimed at testing real world effectiveness of a music therapy (MT) intervention for children with severe neurological disorders. The control group received only the standard neurorestoration program and the experimental group received an additional MT "Auditory Attention plus Communication protocol" just before the usual occupational and speech therapy. Multivariate Item Response Theory (MIRT) identified a neuropsychological status-latent variable manifested in all children and which exhibited highly significant changes only in the experimental group. Changes in brain plasticity also occurred in the experimental group, as evidenced using a Mismatch Event Related paradigm which revealed significant post intervention positive responses in the latency range between 308 and 400 ms in frontal regions. LORETA EEG source analysis identified prefrontal and midcingulate regions as differentially activated by the MT in the experimental group. Taken together, our results showing improved attention and communication as well as changes in brain plasticity in children with severe neurological impairments, confirm the importance of MT for the rehabilitation of patients across a wide range of dysfunctions.
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
The BLAZE language - A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1987-01-01
A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
A model for optimizing file access patterns using spatio-temporal parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk
2013-01-01
For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible filemore » access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.« less
Gooding, Thomas Michael [Rochester, MN
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
Effects of a probiotic intervention in acute canine gastroenteritis--a controlled clinical trial.
Herstad, H K; Nesheim, B B; L'Abée-Lund, T; Larsen, S; Skancke, E
2010-01-01
To evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs. Thirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis. The time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode. The probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.
The impact of arm position on the measurement of orthostatic blood pressure.
Guss, David A; Abdelnur, Diego; Hemingway, Thomas J
2008-05-01
Blood pressure is a standard vital sign in patients evaluated in an Emergency Department. The American Heart Association has recommended a preferred position of the arm and cuff when measuring blood pressure. There is no formal recommendation for arm position when measuring orthostatic blood pressure. The objective of this study was to assess the impact of different arm positions on the measurement of postural changes in blood pressure. This was a prospective, unblinded, convenience study involving Emergency Department patients with complaints unrelated to cardiovascular instability. Repeated blood pressure measurements were obtained using an automatic non-invasive device with each subject in a supine and standing position and with the arm parallel and perpendicular to the torso. Orthostatic hypotension was defined as a difference of >or= 20 mm Hg systolic or >or= 10 mm Hg diastolic when subtracting standing from supine measurements. There were four comparisons made: group W, arm perpendicular supine and standing; group X, arm parallel supine and standing; group Y, arm parallel supine and perpendicular standing; and group Z, arm perpendicular supine and parallel standing. There were 100 patients enrolled, 55 men, mean age 44 years. Four blood pressure measurements were obtained on each patient. The percentage of patients meeting orthostatic hypotension criteria in each group was: W systolic 6% (95% CI 1%, 11%), diastolic 4% (95% CI 0%, 8%), X systolic 8% (95% CI 3%, 13%), diastolic 9% (95% CI 3%, 13%), Y systolic 19% (95% CI 11%, 27%), diastolic 30% (95% CI 21%, 39%), Z systolic 2% (95% CI 0%, 5%), diastolic 2% (95% CI 0%, 5%). Comparison of Group Y vs. X, Z, and W was statistically significant (p < 0.0001). Arm position has a significant impact on determination of postural changes in blood pressure. The combination of the arm parallel when supine and perpendicular when standing may significantly overestimate the orthostatic change. Arm position should be held constant in supine and standing positions when assessing for orthostatic change in blood pressure.
A Bayesian adaptive design for biomarker trials with linked treatments.
Wason, James M S; Abraham, Jean E; Baird, Richard D; Gournaris, Ioannis; Vallier, Anne-Laure; Brenton, James D; Earl, Helena M; Mander, Adrian P
2015-09-01
Response to treatments is highly heterogeneous in cancer. Increased availability of biomarkers and targeted treatments has led to the need for trial designs that efficiently test new treatments in biomarker-stratified patient subgroups. We propose a novel Bayesian adaptive randomisation (BAR) design for use in multi-arm phase II trials where biomarkers exist that are potentially predictive of a linked treatment's effect. The design is motivated in part by two phase II trials that are currently in development. The design starts by randomising patients to the control treatment or to experimental treatments that the biomarker profile suggests should be active. At interim analyses, data from treated patients are used to update the allocation probabilities. If the linked treatments are effective, the allocation remains high; if ineffective, the allocation changes over the course of the trial to unlinked treatments that are more effective. Our proposed design has high power to detect treatment effects if the pairings of treatment with biomarker are correct, but also performs well when alternative pairings are true. The design is consistently more powerful than parallel-groups stratified trials. This BAR design is a powerful approach to use when there are pairings of biomarkers with treatments available for testing simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaves, Mario Paul
2017-07-01
For my project I have selected to research and design a high current pulse system, which will be externally triggered from a 5V pulse. The research will be conducted in the region of paralleling the solid state switches for a higher current output, as well as to see if there will be any other advantages in doing so. The end use of the paralleled solid state switches will be used on a Capacitive Discharge Unit (CDU). For the first part of my project, I have set my focus on the design of the circuit, selection of components, and simulation ofmore » the circuit.« less
An Old Story in the Parallel Synthesis World: An Approach to Hydantoin Libraries.
Bogolubsky, Andrey V; Moroz, Yurii S; Savych, Olena; Pipko, Sergey; Konovets, Angelika; Platonov, Maxim O; Vasylchenko, Oleksandr V; Hurmach, Vasyl V; Grygorenko, Oleksandr O
2018-01-08
An approach to the parallel synthesis of hydantoin libraries by reaction of in situ generated 2,2,2-trifluoroethylcarbamates and α-amino esters was developed. To demonstrate utility of the method, a library of 1158 hydantoins designed according to the lead-likeness criteria (MW 200-350, cLogP 1-3) was prepared. The success rate of the method was analyzed as a function of physicochemical parameters of the products, and it was found that the method can be considered as a tool for lead-oriented synthesis. A hydantoin-bearing submicromolar primary hit acting as an Aurora kinase A inhibitor was discovered with a combination of rational design, parallel synthesis using the procedures developed, in silico and in vitro screenings.
Effects of intensive short-term dynamic psychotherapy on social cognition in major depression.
Ajilchi, Bita; Kisely, Steve; Nejati, Vahid; Frederickson, Jon
2018-05-23
Social cognition is commonly affected in psychiatric disorders and is a determinant of quality of life. However, there are few studies of treatment. To investigate the efficacy of intensive short-term dynamic psychotherapy on social cognition in major depression. This study used a parallel group randomized control design to compare pre-test and post-test social cognition scores between depressed participants receiving ISTDP and those allocated to a wait-list control group. Participants were adults (19-40 years of age) who were diagnosed with depression. We recruited 32 individuals, with 16 participants allocated to the ISTDP and control groups, respectively. Both groups were similar in terms of age, sex and educational level. Multivariate analysis of variance (MANOVA) demonstrated that the intervention was effective in terms of the total score of social cognition: the experimental group had a significant increase in the post-test compared to the control group. In addition, the experimental group showed a significant reduction in the negative subjective score compared to the control group as well as an improvement in response to positive neutral and negative states. Depressed patients receiving ISTDP show a significant improvement in social cognition post treatment compared to a wait-list control group.
Gholamzadeh Baeis, Mehdi; Amiri, Ghasem; Miladinia, Mojtaba
2017-01-01
This study examines the effect of the addition of IMOD, a novel multi-herbal drug to the highly active anti-retroviral therapy (HAART) regimen, on the immunological status of HIV-positive patients. A randomized two-parallel-group (HAART group versus HAART+IMOD group), pretest-posttest design was used.Sixty patients with indications for treatment with the HAART regimen participated. One week before and 2 days after the treatments, immunological parameters including total lymphocyte count (TLC) and CD4 cell count were assessed.The intervention group received the HAART regimen plus IMOD every day for 3 months. The control group received only the HAART regimen every day for 3 months. In the intervention group, a significant difference was observed in CD4between before and after drug therapy (CD4 was increased). However, in the control group, the difference in CD4 was not significant before and after drug therapy. The difference in TLC was not significantly different between the two groups before and after therapy. Nevertheless, TLC was higher in the intervention group. IMOD (as a herbal drug) has been successfully added to the HAART regimen to improve the immunological status of HIV-positive patients.
Ant-like task allocation and recruitment in cooperative robots
NASA Astrophysics Data System (ADS)
Krieger, Michael J. B.; Billeter, Jean-Bernard; Keller, Laurent
2000-08-01
One of the greatest challenges in robotics is to create machines that are able to interact with unpredictable environments in real time. A possible solution may be to use swarms of robots behaving in a self-organized manner, similar to workers in an ant colony. Efficient mechanisms of division of labour, in particular series-parallel operation and transfer of information among group members, are key components of the tremendous ecological success of ants. Here we show that the general principles regulating division of labour in ant colonies indeed allow the design of flexible, robust and effective robotic systems. Groups of robots using ant-inspired algorithms of decentralized control techniques foraged more efficiently and maintained higher levels of group energy than single robots. But the benefits of group living decreased in larger groups, most probably because of interference during foraging. Intriguingly, a similar relationship between group size and efficiency has been documented in social insects. Moreover, when food items were clustered, groups where robots could recruit other robots in an ant-like manner were more efficient than groups without information transfer, suggesting that group dynamics of swarms of robots may follow rules similar to those governing social insects.
Feng, Shuo
2014-01-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns. PMID:24834420
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns.
A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm
Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay
2012-01-01
A design of systolic array-based Field Programmable Gate Array (FPGA) parallel architecture for Basic Local Alignment Search Tool (BLAST) Algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm which has been used by bioinformatics experts. In contrast to other designs that detect at most one hit in one-clock-cycle, our design applies a Multiple Hits Detection Module which is a pipelining systolic array to search multiple hits in a single-clock-cycle. Further, we designed a Hits Combination Block which combines overlapping hits from systolic array into one hit. These implementations completed the first and second step of BLAST architecture and achieved significant speedup comparing with previously published architectures. PMID:25969747
Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin
2016-01-01
Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD subjects are randomized between placebo and active treatment. In the second stage placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulations of placebo nonresponders then combined to yield a single p-value for treatment comparison. Two-way enriched design (TED) is an extension of SPCD where active treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
Performance of the Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance 1/O to applications the applications that rely on them. In Section 3 we describe that access data in patterns that have been observed to be common.
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Lots of researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stage of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head(A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of A3 head is established by using the first-order perturbation theory and vector chain method. According to the mapping property of motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy in the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and the global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy in the end-effector of the mechanism. The results show that orientation error sources have bigger effect on the accuracy in the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into the issue of nonlinearly constrained optimization with the manufacturing cost minimum being the optimization objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanisms.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
Nagao, Takehiko; Toyoda, Kazunori; Kitagawa, Kazuo; Kitazono, Takanari; Yamagami, Hiroshi; Uchiyama, Shinichiro; Tanahashi, Norio; Matsumoto, Masayasu; Minematsu, Kazuo; Nagata, Izumi; Nishikawa, Masakatsu; Nanto, Shinsuke; Abe, Kenji; Ikeda, Yasuo; Ogawa, Akira
2018-04-01
This comparison of PRAsugrel and clopidogrel in Japanese patients with ischemic STROke (PRASTRO)-I trial investigates the noninferiority of prasugrel to clopidogrel sulfate in the prevention of recurrence of primary events (ischemic stroke, myocardial infarction, and death from other vascular causes), and the long-term safety of prasugrel in Japanese patients with non-cardioembolic stroke. This was an active-controlled, randomized, double-blind, double-dummy, parallel-group study conducted between July 2011 and March 2016 at multiple centers around Japan. Patients had to meet eligibility criteria before receiving 3.75 mg prasugrel or 75 mg clopidogrel orally once daily for a period of 96-104 weeks. A total of 3747 patients were included in this trial; 1598 in the 3.75 mg prasugrel group and 1551 in the 75 mg clopidogrel group completed the study. During the study period, 287 (15.2%) patients in the prasugrel group and 311 (16.7%) in the clopidogrel group discontinued treatment. Baseline characteristics, safety, and efficacy results are forthcoming and will be published separately. This article presents the study design and rationale for a trial investigating the noninferiority of prasugrel to clopidogrel sulfate with regards to the inhibitory effect on primary events in patients with non-cardioembolic stroke.
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming However, a real time GPU-based beamformer requires high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g. Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of its original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables in parallel compression and decompression of data. By this means, the data transfer requirement of hardware interface and the transmission time of CPU to GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved by threefold, as compared with transferring original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigate the transmission bandwidth requirement to transfer data from hardware front end to software system but also reduce the transmission time for CPU to GPU data transfer. © The Author(s) 2014.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.
1999-01-01
Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, highlevel parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Optimization of a new flow design for solid oxide cells using computational fluid dynamics modelling
NASA Astrophysics Data System (ADS)
Duhn, Jakob Dragsbæk; Jensen, Anker Degn; Wedel, Stig; Wix, Christian
2016-12-01
Design of a gas distributor to distribute gas flow into parallel channels for Solid Oxide Cells (SOC) is optimized, with respect to flow distribution, using Computational Fluid Dynamics (CFD) modelling. The CFD model is based on a 3d geometric model and the optimized structural parameters include the width of the channels in the gas distributor and the area in front of the parallel channels. The flow of the optimized design is found to have a flow uniformity index value of 0.978. The effects of deviations from the assumptions used in the modelling (isothermal and non-reacting flow) are evaluated and it is found that a temperature gradient along the parallel channels does not affect the flow uniformity, whereas a temperature difference between the channels does. The impact of the flow distribution on the maximum obtainable conversion during operation is also investigated and the obtainable overall conversion is found to be directly proportional to the flow uniformity. Finally the effect of manufacturing errors is investigated. The design is shown to be robust towards deviations from design dimensions of at least ±0.1 mm which is well within obtainable tolerances.
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
Formalization, equivalence and generalization of basic resonance electrical circuits
NASA Astrophysics Data System (ADS)
Penev, Dimitar; Arnaudov, Dimitar; Hinov, Nikolay
2017-12-01
In the work are presented basic resonance circuits, which are used in resonance energy converters. The following resonant circuits are considered: serial, serial with parallel load parallel capacitor, parallel and parallel with serial loaded inductance. For the circuits under consideration, expressions are generated for the frequencies of own oscillations and for the equivalence of the active power emitted in the load. Mathematical expressions are graphically constructed and verified using computer simulations. The results obtained are used in the model based design of resonant energy converters with DC or AC output. This guaranteed the output indicators of power electronic devices.
Programming parallel architectures: The BLAZE family of languages
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush
1988-01-01
Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, to parallelize legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. Whereas, for a matrix multiplication benchmark the average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
Rubus: A compiler for seamless and extensible parallelism
Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, to parallelize legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer’s expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. Whereas, for a matrix multiplication benchmark the average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758
LDRD final report on massively-parallel linear programming : the parPCx system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runsmore » on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Interger and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.« less
Performance of the SERI parallel-passage dehumidifer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlepp, D.; Barlow, R.
1984-09-01
The key component in improving the performance of solar desiccant cooling systems is the dehumidifier. A parallel-passage geometry for the desiccant dehumidifier has been identified as meeting key criteria of low pressure drop, high mass transfer efficiency, and compact size. An experimental program to build and test a small-scale prototype of this design was undertaken in FY 1982, and the results are presented in this report. Computer models to predict the adsorption/desorption behavior of desiccant dehumidifiers were updated to take into account the geometry of the bed and predict potential system performance using the new component design. The parallel-passage designmore » proved to have high mass transfer effectiveness and low pressure drop over a wide range of test conditions typical of desiccant cooling system operation. The prototype dehumidifier averaged 93% effectiveness at pressure drops of less than 50 Pa at design point conditions. Predictions of system performance using models validated with the experimental data indicate that system thermal coefficients of performance (COPs) of 1.0 to 1.2 and electrical COPs above 8.5 are possible using this design.« less
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
Yotebieng, Marcel; Behets, Frieda; Kawende, Bienvenu; Ravelomanana, Noro Lantoniaina Rosa; Tabala, Martine; Okitolonda, Emile W
2017-04-26
Despite the rapid adoption of the World Health Organization's 2013 guidelines, children continue to be infected with HIV perinatally because of sub-optimal adherence to the continuum of HIV care in maternal and child health (MCH) clinics. To achieve the UNAIDS goal of eliminating mother-to-child HIV transmission, multiple, adaptive interventions need to be implemented to improve adherence to the HIV continuum. The aim of this open label, parallel, group randomized trial is to evaluate the effectiveness of Continuous Quality Improvement (CQI) interventions implemented at facility and health district levels to improve retention in care and virological suppression through 24 months postpartum among pregnant and breastfeeding women receiving ART in MCH clinics in Kinshasa, Democratic Republic of Congo. Prior to randomization, the current monitoring and evaluation system will be strengthened to enable collection of high quality individual patient-level data necessary for timely indicators production and program outcomes monitoring to inform CQI interventions. Following randomization, in health districts randomized to CQI, quality improvement (QI) teams will be established at the district level and at MCH clinics level. For 18 months, QI teams will be brought together quarterly to identify key bottlenecks in the care delivery system using data from the monitoring system, develop an action plan to address those bottlenecks, and implement the action plan at the level of their district or clinics. If proven to be effective, CQI as designed here, could be scaled up rapidly in resource-scarce settings to accelerate progress towards the goal of an AIDS free generation. The protocol was retrospectively registered on February 7, 2017. ClinicalTrials.gov Identifier: NCT03048669 .
Thunström, Erik; Manhem, Karin; Rosengren, Annika; Peker, Yüksel
2016-02-01
Obstructive sleep apnea (OSA) is common in people with hypertension, particularly resistant hypertension. Treatment with an antihypertensive agent alone is often insufficient to control hypertension in patients with OSA. To determine whether continuous positive airway pressure (CPAP) added to treatment with an antihypertensive agent has an impact on blood pressure (BP) levels. During the initial 6-week, two-center, open, prospective, case-control, parallel-design study (2:1; OSA/no-OSA), all patients began treatment with an angiotensin II receptor antagonist, losartan, 50 mg daily. In the second 6-week, sex-stratified, open, randomized, parallel-design study of the OSA group, all subjects continued to receive losartan and were randomly assigned to either nightly CPAP as add-on therapy or no CPAP. Twenty-four-hour BP monitoring included assessment every 15 minutes during daytime hours and every 20 minutes during the night. Ninety-one patients with untreated hypertension underwent a home sleep study (55 were found to have OSA; 36 were not). Losartan significantly reduced systolic, diastolic, and mean arterial BP in both groups (without OSA: 12.6, 7.2, and 9.0 mm Hg; with OSA: 9.8, 5.7, and 6.1 mm Hg). Add-on CPAP treatment had no significant changes in 24-hour BP values but did reduce nighttime systolic BP by 4.7 mm Hg. All 24-hour BP values were reduced significantly in the 13 patients with OSA who used CPAP at least 4 hours per night. Losartan reduced BP in OSA, but the reductions were less than in no-OSA. Add-on CPAP therapy resulted in no significant changes in 24-hour BP measures except in patients using CPAP efficiently. Clinical trial registered with www.clinicaltrials.gov (NCT00701428).
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting highest performance workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
Design of high-performance parallelized gene predictors in MATLAB.
Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien
2012-04-10
This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
Increasing airport capacity with modified IFR approach procedures for close-spaced parallel runways
DOT National Transportation Integrated Search
2001-01-01
Because of wake turbulence considerations, current instrument approach : procedures treat close-spaced (i.e., less than 2,500 feet apart) parallel run : ways as a single runway. This restriction is designed to assure safety for all : aircraft types u...
Integrated Joule switches for the control of current dynamics in parallel superconducting strips
NASA Astrophysics Data System (ADS)
Casaburi, A.; Heath, R. M.; Cristiano, R.; Ejrnaes, M.; Zen, N.; Ohkubo, M.; Hadfield, R. H.
2018-06-01
Understanding and harnessing the physics of the dynamic current distribution in parallel superconducting strips holds the key to creating next generation sensors for single molecule and single photon detection. Non-uniformity in the current distribution in parallel superconducting strips leads to low detection efficiency and unstable operation, preventing the scale up to large area sensors. Recent studies indicate that non-uniform current distributions occurring in parallel strips can be understood and modeled in the framework of the generalized London model. Here we build on this important physical insight, investigating an innovative design with integrated superconducting-to-resistive Joule switches to break the superconducting loops between the strips and thus control the current dynamics. Employing precision low temperature nano-optical techniques, we map the uniformity of the current distribution before- and after the resistive strip switching event, confirming the effectiveness of our design. These results provide important insights for the development of next generation large area superconducting strip-based sensors.
Evaluating the performance of parallel subsurface simulators: An illustrative example with PFLOTRAN
Hammond, G E; Lichtner, P C; Mills, R T
2014-01-01
[1] To better inform the subsurface scientist on the expected performance of parallel simulators, this work investigates performance of the reactive multiphase flow and multicomponent biogeochemical transport code PFLOTRAN as it is applied to several realistic modeling scenarios run on the Jaguar supercomputer. After a brief introduction to the code's parallel layout and code design, PFLOTRAN's parallel performance (measured through strong and weak scalability analyses) is evaluated in the context of conceptual model layout, software and algorithmic design, and known hardware limitations. PFLOTRAN scales well (with regard to strong scaling) for three realistic problem scenarios: (1) in situ leaching of copper from a mineral ore deposit within a 5-spot flow regime, (2) transient flow and solute transport within a regional doublet, and (3) a real-world problem involving uranium surface complexation within a heterogeneous and extremely dynamic variably saturated flow field. Weak scalability is discussed in detail for the regional doublet problem, and several difficulties with its interpretation are noted. PMID:25506097
Evaluating the performance of parallel subsurface simulators: An illustrative example with PFLOTRAN.
Hammond, G E; Lichtner, P C; Mills, R T
2014-01-01
[1] To better inform the subsurface scientist on the expected performance of parallel simulators, this work investigates performance of the reactive multiphase flow and multicomponent biogeochemical transport code PFLOTRAN as it is applied to several realistic modeling scenarios run on the Jaguar supercomputer. After a brief introduction to the code's parallel layout and code design, PFLOTRAN's parallel performance (measured through strong and weak scalability analyses) is evaluated in the context of conceptual model layout, software and algorithmic design, and known hardware limitations. PFLOTRAN scales well (with regard to strong scaling) for three realistic problem scenarios: (1) in situ leaching of copper from a mineral ore deposit within a 5-spot flow regime, (2) transient flow and solute transport within a regional doublet, and (3) a real-world problem involving uranium surface complexation within a heterogeneous and extremely dynamic variably saturated flow field. Weak scalability is discussed in detail for the regional doublet problem, and several difficulties with its interpretation are noted.
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics than two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In thismore » paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance bench marks are given for certain simple and complex functions.« less
Gathmann, Bettina; Schulte, Frank P; Maderwald, Stefan; Pawlikowski, Mirko; Starcke, Katrin; Schäfer, Lena C; Schöler, Tobias; Wolf, Oliver T; Brand, Matthias
2014-03-01
Stress and additional load on the executive system, produced by a parallel working memory task, impair decision making under risk. However, the combination of stress and a parallel task seems to preserve the decision-making performance [e.g., operationalized by the Game of Dice Task (GDT)] from decreasing, probably by a switch from serial to parallel processing. The question remains how the brain manages such demanding decision-making situations. The current study used a 7-tesla magnetic resonance imaging (MRI) system in order to investigate the underlying neural correlates of the interaction between stress (induced by the Trier Social Stress Test), risky decision making (GDT), and a parallel executive task (2-back task) to get a better understanding of those behavioral findings. The results show that on a behavioral level, stressed participants did not show significant differences in task performance. Interestingly, when comparing the stress group (SG) with the control group, the SG showed a greater increase in neural activation in the anterior prefrontal cortex when performing the 2-back task simultaneously with the GDT than when performing each task alone. This brain area is associated with parallel processing. Thus, the results may suggest that in stressful dual-tasking situations, where a decision has to be made when in parallel working memory is demanded, a stronger activation of a brain area associated with parallel processing takes place. The findings are in line with the idea that stress seems to trigger a switch from serial to parallel processing in demanding dual-tasking situations.
Moll, Sandra; Patten, Scott Burton; Stuart, Heather; Kirsh, Bonnie; MacDermid, Joy Christine
2015-04-16
Mental illness is a significant and growing problem in Canadian healthcare organizations, leading to tremendous personal, social and financial costs for individuals, their colleagues, their employers and their patients. Early and appropriate intervention is needed, but unfortunately, few workers get the help that they need in a timely way due to barriers related to poor mental health literacy, stigma, and inadequate access to mental health services. Workplace education and training is one promising approach to early identification and support for workers who are struggling. Little is known, however, about what approach is most effective, particularly in the context of healthcare work. The purpose of this study is to compare the impact of a customized, contact-based education approach with standard mental health literacy training on the mental health knowledge, stigmatized beliefs and help-seeking/help-outreach behaviors of healthcare employees. A multi-centre, randomized, two-group parallel group trial design will be adopted. Two hundred healthcare employees will be randomly assigned to one of two educational interventions: Beyond Silence, a peer-led program customized to the healthcare workplace, and Mental Health First Aid, a standardized literacy based training program. Pre, post and 3-month follow-up surveys will track changes in knowledge (mental health literacy), attitudes towards mental illness, and help-seeking/help-outreach behavior. An intent-to-treat, repeated measures analysis will be conducted to compare changes in the two groups over time in terms of the primary outcome of behavior change. Linear regression modeling will be used to explore the extent to which knowledge, and attitudes predict behavior change. Qualitative interviews with participants and leaders will also be conducted to examine process and implementation of the programs. This is one of the first experimental studies to compare outcomes of standard mental health literacy training to an intervention with an added anti-stigma component (using best-practices of contact-based education). Study findings will inform recommendations for designing workplace mental health education to promote early intervention for employees with mental health issues in the context of healthcare work. May 2014 - ClinicalTrials.gov: NCT02158871.
Soto-Quiros, Pablo
2015-01-01
This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is advantage to use multicore processors and a parallel computing environment to minimize the high execution time. Additionally, speedup increases when the number of logical processors and length of the signal increase.
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Yang, Yiling; Yan, Xiaoxia; Deng, Hongmei; Zeng, Dian; Huang, Jianpeng; Fu, Wenbin; Xu, Nenggui; Liu, Jianhua
2017-07-10
A large number of randomized trials on the use of acupuncture to treat chronic pain have been conducted. However, there is considerable controversy regarding the effectiveness of acupuncture. We designed a randomized trial involving patients with chronic neck pain (CNP) to investigate whether acupuncture is more effective than a placebo in treating CNP. A five-arm, parallel, single-blinded, randomized, sham-controlled trial was designed. Patients with CNP of more than 3 months' duration are being recruited from Guangdong Provincial Hospital of Chinese Medicine (China). Following examination, 175 patients will be randomized into one of five groups (35 patients in each group) as follows: a traditional acupuncture group (group A), a shallow-puncture group (group B), a non-acupoint acupuncture group (group C), a non-acupoint shallow-puncture group (group D) and a sham-puncture group (group E). The interventions will last for 20 min and will be carried out twice a week for 5 weeks. The primary outcome will be evaluated by changes in the Northwick Park Neck Pain Questionnaire (NPQ). Secondary outcomes will be measured by the pain threshold, the Short Form McGill Pain Questionnaire-2 (SF-MPQ-2), the 36-Item Short-Form Health Survey (SF-36) and diary entries. Analysis of the data will be performed at baseline, at the end of the intervention and at 3 months' follow-up. The safety of acupuncture will be evaluated at each treatment period. The purpose of this trial is to determine whether traditional acupuncture is more effective for chronic pain relief than sham acupuncture in adults with CNP, and to determine which type of sham acupuncture is the optimal control for clinical trials. Chinese Clinical Trial Registry: ChiCTR-IOR-15006886 . Registered on 2 July 2015.
Roller-gear drives for robotic manipulators design, fabrication and test
NASA Technical Reports Server (NTRS)
Anderson, William J.; Shipitalo, William
1991-01-01
Two single axis planetary roller-gear drives and a two axis roller-gear drive with dual inputs were designed for use as robotic transmissions. Each of the single axis drives is a two planet row, four planet arrangement with spur gears and compressively loaded cylindrical rollers acting in parallel. The two axis drive employs bevel gears and cone rollers acting in parallel. The rollers serve a dual function: they remove backlash from the system, and they transmit torque when the gears are not fully engaged.
The Area-Time Complexity of Sorting.
1984-12-01
suggests a classification of keys into short (k < logn), long (k > 2 logn), and of medium length. Optimal or near-optimal designs of VLSI sorters are...suggests a classification of keys into short (k 4 logn ), long (k > 21ogn ), and of medium length. Optimal or near-optimal designs of VLSI sorters are...ARCHITECTURES 79 5.1 Introduction 79 5.2 Parallel Algorithms for Sorting 80 . 5.3 Parallel Architectures 88 6 OPTIMAL VLSI SORTERS FOR KEYS OF LENGTH k - logn
Experimental entangled photon pair generation using crystals with parallel optical axes.
Villar, Aitor; Lohrmann, Alexander; Ling, Alexander
2018-05-14
We present an optical design where polarization-entangled photon pairs are generated within two β-Barium Borate crystals whose optical axes are parallel. This design increases the spatial mode overlap of the emitted photon pairs enhancing single mode collection without the need for additional spatial walk-off compensators. The observed photon pair rate is at least 65 000 pairs/s/mW with a quantum state fidelity of 99.53 ± 0.22% when pumped with an elliptical spatial profile.
A multioutput LLC-type parallel resonant converter
NASA Astrophysics Data System (ADS)
Liu, Rui; Lee, C. Q.; Upadhyay, Anand K.
1992-07-01
When an LLC-type parallel resonant converter (LLC-PRC) operates above resonant frequency, the switching transistors can be turned off at zero voltage. Further study reveals that the LLC-PRC possesses the advantage of lower converter voltage gain as compared with the conventional PRC. Based on analytic results, a complete set of design curves is obtained, from which a systematic design procedure is developed. Experimental results from a 150 W 150 kHz multioutput LLC-type PRC power supply are presented.
Yokohama, Noriya
2013-07-01
This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost.
Experimental entangled photon pair generation using crystals with parallel optical axes
NASA Astrophysics Data System (ADS)
Villar, Aitor; Lohrmann, Alexander; Ling, Alexander
2018-05-01
We present an optical design where polarization-entangled photon pairs are generated within two $\\beta$-Barium Borate crystals whose optical axes are parallel. This design increases the spatial mode overlap of the emitted photon pairs enhancing single mode collection without the need for additional spatial walk-off compensators. The observed photon pair rate is at least 65000 pairs/s/mW with a quantum state fidelity of 99.53$\\pm$0.22% when pumped with an elliptical spatial profile.
Unilateral posterior crossbite and mastication.
Rilo, Benito; da Silva, José Luis; Mora, María Jesús; Cadarso-Suárez, Carmen; Santana, Urbano
2007-05-01
This study was designed to characterize masticatory-cycle morphology, and distance of the contact glide in the closing masticatory stroke, in adult subjects with uncorrected unilateral posterior crossbite (UPXB), comparing the results obtained with those obtained in a parallel group of normal subjects. Mandibular movements (masticatory movements and laterality movements with dental contact) were registered using a gnathograph (MK-6I Diagnostic System) during unilateral chewing of a piece of gum. Traces were recorded on the crossbite and non-crossbite sides in the crossbite group, and likewise on both sides in the non-crossbite group. Mean contact glide distance on the crossbite side in the UPXB group was significantly lower than in the control group (p<0.001), and mean contact glide distance on the non-crossbite side in the UPXB group was significantly lower than in the control group (p=0.042). Cycle morphology was abnormal during chewing on the crossbite side, with the frequency distribution of cycle types differing significantly from that for the noncrossbite side and that for the control group (p<0.001). Patients with crossbite showed alterations in both contact glide distances and masticatory cycle morphology. These alterations are probably adaptive responses allowing maintenance of adequate masticatory function despite the crossbite.
Kircik, Leon H
2009-07-01
This 12-week, single-center, investigator-blinded, randomized, parallel-design study assessed the safety and efficacy of tretinoin microsphere gel 0.04% delivered by pump (TMG PUMP) to tazarotene cream 0.05% (TAZ) in mild-to-moderate facial acne vulgaris. Efficacy measurements included investigator global assessment (IGA), lesion counts, and subject self-assessment of acne signs and symptoms. Efficacy was generally comparable between treatment groups, although TMG PUMP provided more rapid results in several parameters. IGA showed a more rapid mean change from baseline at week 4 in the TMG PUMP group (-0.18 versus -0.05 in the TAZ subjects). TMG PUMP yielded more rapid improvement in papules. At week 4, the mean percentage change from baseline in open comedones was statistically significant at -64% in the TMG PUMP group (P=0.0039, within group) versus -19% in the TAZ group (not statistically significant within the group; P=0.1875). Skin dryness, peeling and pruritus were significantly less in the TMG PUMP group as early as week 4. Adverse events related to study treatment were rare in both groups and all resolved upon discontinuation of study medication.
The role of single immediate loading implant in long Class IV Kennedy mandibular partial denture.
Mohamed, Gehan F; El Sawy, Amal A
2012-10-01
The treatment of long-span Kennedy class IV considers a prosthodontic challenge. This study evaluated the integrity of principle abutments in long Kennedy class IV clinically and radiographically, when rehabilitated with conventional metallic partial denture as a control group and mandibular partial overdentures supported with single immediately loaded implant in symphyseal as a study group. Twelve male patients were divided randomly allotted into two equal groups. First group patients received removable metallic partial denture, whereas in the second group, patients received partial overdentures supported with single immediately loaded implant in symphyseal region. The partial dentures design in both groups was the same. Long-cone paralleling technique and transmission densitometer were used at the time of denture insertion, 3, 6, and 12 months. Gingival index, bone loss, and optical density were measured for principle abutments during the follow-up. A significant reduction in bone loss and density were detected in group II comparing with group I. Gingival index had no significant change (p-value < 0.05). A single symphyseal implant in long span class IV Kennedy can play a pivotal role to improve the integrity of the principle abutments and alveolar bone support. © 2010 Wiley Periodicals, Inc.
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables which motivates the use of gradient based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft with where operational scheduling of technologies is key to achieving good performance.
A general purpose subroutine for fast fourier transform on a distributed memory parallel machine
NASA Technical Reports Server (NTRS)
Dubey, A.; Zubair, M.; Grosch, C. E.
1992-01-01
One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on a distributed memory parallel machine Intel iPSC/860 is evaluated.
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
Structural synthesis: Precursor and catalyst
NASA Technical Reports Server (NTRS)
Schmit, L. A.
1984-01-01
More than twenty five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods general guidelines are suggested.
High convergence efficiency design of flat Fresnel lens with large aperture
NASA Astrophysics Data System (ADS)
Ke, Jieyao; Zhao, Changming; Guan, Zhe
2018-01-01
This paper designed a circle-shaped Fresnel lens with large aperture as part of the solar pumped laser design project. The Fresnel lens designed in this paper simulate in size 1000mm×1000mm, focus length 1200mm and polymethyl methacrylate (PMMA) material in order to conduct high convergence efficiency. In the light of design requirement of concentric ring with same width of 0.3mm, this paper proposed an optimized Fresnel lens design based on previous sphere design and conduct light tracing simulation in Matlab. This paper also analyzed the effect of light spot size, light intensity distribution, optical efficiency under four conditions, monochromatic parallel light, parallel spectrum light, divergent monochromatic light and sunlight. Design by 550nm wavelength and under the condition of Fresnel reflection, the results indicated that the designed lens could convergent sunlight in diffraction limit of 11.8mm with a 78.7% optical efficiency, better than the sphere cutting design results of 30.4%.
Shu, Deming; Kearney, Steven P.; Preissner, Curt A.
2015-02-17
A method and deformation compensated flexural pivots structured for precision linear nanopositioning stages are provided. A deformation-compensated flexural linear guiding mechanism includes a basic parallel mechanism including a U-shaped member and a pair of parallel bars linked to respective pairs of I-link bars and each of the I-bars coupled by a respective pair of flexural pivots. The basic parallel mechanism includes substantially evenly distributed flexural pivots minimizing center shift dynamic errors.
Software Tools for Design and Performance Evaluation of Intelligent Systems
2004-08-01
Self-calibration of Three-Legged Modular Reconfigurable Parallel Robots Based on Leg-End Distance Errors,” Robotica , Vol. 19, pp. 187-198. [4...9] Lintott, A. B., and Dunlop, G. R., “Parallel Topology Robot Calibration,” Robotica . [10] Vischer, P., and Clavel, R., “Kinematic Calibration...of the Parallel Delta Robot,” Robotica , Vol. 16, pp.207- 218, 1998. [11] Joshi, S.A., and Surianarayan, A., “Calibration of a 6-DOF Cable Robot Using
Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures
NASA Technical Reports Server (NTRS)
Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.
1998-01-01
In this paper a new algorithm, designated as Fast Invariant Imbedding algorithm, for solution of Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other Fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate size problems.
Zheng, Shuai; Lal, Sara; Meier, Peter; Sibbritt, David; Zaslawski, Chris
2014-06-01
Stress is a major problem in today's fast-paced society and can lead to serious psychosomatic complications. The ancient Chinese mind-body exercise of Tai Chi may provide an alternative and self-sustaining option to pharmaceutical medication for stressed individuals to improve their coping mechanisms. The protocol of this study is designed to evaluate whether Tai Chi practice is equivalent to standard exercise and whether the Tai Chi group is superior to a wait-list control group in improving stress coping levels. This study is a 6-week, three-arm, parallel, randomized, clinical trial designed to evaluate Tai Chi practice against standard exercise and a Tai Chi group against a nonactive control group over a period of 6 weeks with a 6-week follow-up. A total of 72 healthy adult participants (aged 18-60 years) who are either Tai Chi naïve or have not practiced Tai Chi in the past 12 months will be randomized into a Tai Chi group (n = 24), an exercise group (n = 24) or a wait-list group (n = 24). The primary outcome measure will be the State Trait Anxiety Inventory with secondary outcome measures being the Perceived Stress Scale 14, heart rate variability, blood pressure, Short Form 36 and a visual analog scale. The protocol is reported using the appropriate Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) items. Copyright © 2014. Published by Elsevier B.V.
Yu, Ye-Feng; Dai, Jia-Ping; Sheng, Jian-Ming; Zhou, Xiao
2017-06-25
To compare clinical outcomes of perpendicular or parallel double plate in treating type C fractures of distal humerus in adults. From March 2009 and March 2013, 40 adult patients with type C distal humerus fractures were treated. The patients were divided into two groups according to fixed form. In perpendicular group(group A), there were 13 males and 9 females with a mean age of (37.56±9.24) years old(ranged 18 to 56);while in parallel plating group(group B), including 11 males and 7 females, with a mean age of (41.35±9.03) year old(ranged 20 to 53). All fractures were fresh and closed without blood vessels or nerve damaged. Incision length, operating time, blood loss, hospital stay, preoperative and postoperative radiological change, range of activity of elbow joint, Mayo score, flexor and extensor elbow strength, and postoperative complications were observed and compared. All incisions were healed well. One patient occurred myositis ossificans between two groups. Two patients in group A and 1 patient in group B occurred elbow joint stiffness. All fractures were obtained bone union. Group A were followed up from 20 to 36 months with an average of (25.2±7.1) months, while group B were followed up from 18 to 35 months with an average of(24.3±6.0) months. There were significant differences in blood loss and operative time, while there was no obvious meaning in incision length, hospital stay, muscle strength, fracture healing time, range of activity of elbow joint. Mayo score of group A was 82.27±10.43, 6 cases obtained excellent results, 12 good, 3 moderate and 1 poor;in group B was 81.94±12.02, 5 cases obtained excellent results, 9 good, 3 moderate and 1 poor;and there were no statistical significance between two groups. There was no significant differences in clinical effects between perpendicular and parallel double plate for adult patients with type C distal humerus fractures, while the operation should choose according to facture and proficiency of operator.
A parallel input composite transimpedance amplifier.
Kim, D J; Kim, C
2018-01-01
A new approach to high performance current to voltage preamplifier design is presented. The design using multiple operational amplifiers (op-amps) has a parasitic capacitance compensation network and a composite amplifier topology for fast, precision, and low noise performance. The input stage consisting of a parallel linked JFET op-amps and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensures wide bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high impedance transport and scanning tunneling microscopy measurements.
A parallel input composite transimpedance amplifier
NASA Astrophysics Data System (ADS)
Kim, D. J.; Kim, C.
2018-01-01
A new approach to high performance current to voltage preamplifier design is presented. The design using multiple operational amplifiers (op-amps) has a parasitic capacitance compensation network and a composite amplifier topology for fast, precision, and low noise performance. The input stage consisting of a parallel linked JFET op-amps and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensures wide bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high impedance transport and scanning tunneling microscopy measurements.
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then, placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility to re-estimate the sample size and adjust the design parameters, allocation proportion to placebo in stage 1 of SPCD, and weight of stage 1 data in the overall efficacy test statistic during an interim analysis.
Naumann, M; Lowe, N J
2001-01-01
Objectives To evaluate the safety and efficacy of botulinum toxin type A in the treatment of bilateral primary axillary hyperhidrosis. Design Multicentre, randomised, parallel group, placebo controlled trial. Setting 17 dermatology and neurology clinics in Belgium, Germany, Switzerland, and the United Kingdom. Participants Patients aged 18-75 years with bilateral primary axillary hyperhidrosis sufficient to interfere with daily living. 465 were screened, 320 randomised, and 307 completed the study. Interventions Patients received either botulinum toxin type A (Botox) 50 U per axilla or placebo by 10-15 intradermal injections evenly distributed within the hyperhidrotic area of each axilla, defined by Minor's iodine starch test. Main outcome measures Percentage of responders (patients with ⩾50% reduction from baseline of spontaneous axillary sweat production) at four weeks, patients' global assessment of treatment satisfaction score, and adverse events. Results At four weeks, 94% (227) of the botulinum toxin type A group had responded compared with 36% (28) of the placebo group. By week 16, response rates were 82% (198) and 21% (16), respectively. The results for all other measures of efficacy were significantly better in the botulinum toxin group than the placebo group. Significantly higher patient satisfaction was reported in the botulinum toxin type A group than the placebo group (3.3 v 0.8, P<0.001 at 4 weeks). Adverse events were reported by only 27 patients (11%) in the botulinum toxin group and four (5%) in the placebo group (P>0.05). Conclusion Botulinum toxin type A is a safe and effective treatment for primary axillary hyperhidrosis and produces high levels of patient satisfaction. What is already known on this topicPrimary hyperhidrosis is a chronic disorder that can affect any part of the body, especially the axillas, palms, feet, and faceCurrent treatments are often ineffective, short acting, or poorly toleratedWhat this study addsBotulinum toxin type A was significantly better than placebo on all measures of sweatingPatient satisfaction was high and few adverse events were reportedEffects of treatment remained apparent at 16 weeks PMID:11557704
Evans, E Glyn V; Sigurgeirsson, Bárdur
1999-01-01
Objective To compare the efficacy and safety of continuous terbinafine with intermittent itraconazole in the treatment of toenail onychomycosis. Design Prospective, randomised, double blind, double dummy, multicentre, parallel group study lasting 72 weeks. Setting 35 centres in six European countries. Subjects 496 patients aged 18 to 75 years with a clinical and mycological diagnosis of dermatophyte onychomycosis of the toenail. Interventions Study patients were randomly divided into four parallel groups to receive either terbinafine 250 mg a day for 12 or 16 weeks (groups T12 and T16) or itraconazole 400 mg a day for 1 week in every 4 weeks for 12 or 16 weeks (groups I3 and I4). Main outcome measures Assessment of primary efficacy at week 72 was mycological cure, defined as negative results on microscopy and culture of samples from the target toenail. Results At week 72 the mycological cure rates were 75.7% (81/107) in the T12 group and 80.8% (80/99) in the T16 group compared with 38.3% (41/107) in the I3 group and 49.1 % (53/108) in the I4 group. All comparisons (T12 v I3, T12 v I4, T16 v I3, T16 v I4) showed significantly higher cure rates in the terbinafine groups (all P<0.0001). Also, all secondary clinical outcome measures were significantly in favour of terbinafine at week 72. There were no differences in the number or type of adverse events recorded in the terbinafine or itraconazole groups. Conclusion Continuous terbinafine is significantly more effective than intermittent itraconazole in the treatment of patients with toenail onychomycosis. Key messagesGiven a correct diagnosis, fungal nail disease (onychomycosis) is curableTerbinafine is an allylamine antifungal with a primarily fungicidal mode of actionContinuous terbinafine treatment over 12 or 16 weeks achieves higher rates of clinical and mycological cure than intermittent itraconazole given over the same periodsTerbinafine is safe and well tolerated over 12 or 16 weeks of continuous treatmentContinuous terbinafine should be the current treatment of choice for onychomycosis PMID:10205099
Lähteenmäki, Pekka; Haukkamaa, Maija; Puolakka, Jukka; Riikonen, Ulla; Sainio, Susanna; Suvisaari, Janne; Nilsson, Carl Gustaf
1998-01-01
Objectives: To assess whether the levonorgestrel intrauterine system could provide a conservative alternative to hysterectomy in the treatment of excessive uterine bleeding. Design: Open randomised multicentre study with two parallel groups: a levonorgestrel intrauterine system group and a control group. Setting: Gynaecology departments of three hospitals in Finland. Subjects: Fifty six women aged 33-49 years scheduled to undergo hysterectomy for treatment of excessive uterine bleeding. Interventions: Women were randomised either to continue with their current medical treatment or to have a levonorgestrel intrauterine system inserted. Main outcome measure: Proportion of women cancelling their decision to undergo hysterectomy. Results: At 6 months, 64.3% (95% confidence interval 44.1 to 81.4%) of the women in the levonorgestrel intrauterine system group and 14.3% (4.0 to 32.7%) in the control group had cancelled their decision to undergo hysterectomy (P<0.001). Conclusions: The use of the levonorgestrel intrauterine system is a good conservative alternative to hysterectomy in the treatment of menorrhagia and should be considered before hysterectomy or other invasive treatments. PMID:9552948
Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark
2010-08-01
In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech restructuring treatment. Post treatment, participants were randomly assigned to 2 trial arms: standard maintenance and standard maintenance plus VSM. Participants in the latter arm viewed stutter-free videos of themselves each day for 1 month. The addition of VSM did not improve speech outcomes, as measured by percent syllables stuttered, at either 1 or 6 months postrandomization. However, at the latter assessment, self-rating of worst stuttering severity by the VSM group was 10% better than that of the control group, and satisfaction with speech fluency was 20% better. Quality of life was also better for the VSM group, which was mildly to moderately impaired compared with moderate impairment in the control group. VSM intervention after treatment was associated with improvements in self-reported outcomes. The clinical implications of this finding are discussed.
Accuracy of impressions with different impression materials in angulated implants.
Reddy, S; Prasad, K; Vakil, H; Jain, A; Chowdhary, R
2013-01-01
To evaluate the dimensional accuracy of the resultant (duplicative) casts made from two different impression materials (polyvinyl siloxane and polyether) in parallel and angulated implants. Three definitive master casts (control groups) were fabricated in dental stone with three implants, placed at equi-distance. In first group (control), all three implants were placed parallel to each other and perpendicular to the plane of the cast. In the second and third group (control), all three implants were placed at 10° and 15 o angulation respectively to the long axis of the cast, tilting towards the centre. Impressions were made with polyvinyl siloxane and polyether impression materials in a special tray, using a open tray impression technique from the master casts. These impressions were poured to obtain test casts. Three reference distances were evaluated on each test cast by using a profile projector and compared with control groups to determine the effect of combined interaction of implant angulation and impression materials on the accuracy of implant resultant cast. Statistical analysis revealed no significant difference in dimensional accuracy of the resultant casts made from two different impression materials (polyvinyl siloxane and polyether) by closed tray impression technique in parallel and angulated implants. On the basis of the results of this study, the use of both the impression materials i.e., polyether and polyvinyl siloxane impression is recommended for impression making in parallel as well as angulated implants.
Pang, Yong; Yu, Baiying; Vigneron, Daniel B; Zhang, Xiaoliang
2014-02-01
Quadrature coils are often desired in MR applications because they can improve MR sensitivity and also reduce excitation power. In this work, we propose, for the first time, a quadrature array design strategy for parallel transmission at 298 MHz using single-feed circularly polarized (CP) patch antenna technique. Each array element is a nearly square ring microstrip antenna and is fed at a point on the diagonal of the antenna to generate quadrature magnetic fields. Compared with conventional quadrature coils, the single-feed structure is much simple and compact, making the quadrature coil array design practical. Numerical simulations demonstrate that the decoupling between elements is better than -35 dB for all the elements and the RF fields are homogeneous with deep penetration and quadrature behavior in the area of interest. Bloch equation simulation is also performed to simulate the excitation procedure by using an 8-element quadrature planar patch array to demonstrate its feasibility in parallel transmission at the ultrahigh field of 7 Tesla.
Branson: A Mini-App for Studying Parallel IMC, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Alex
This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations it is not designed to model radiation transport: Branson contains simple physics and does not have a multigroup treatment, nor can it use physical material data. The opacities have are simple polynomials in temperature there is a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities and other DOEmore » partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.« less
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
Parallel Optical Random Access Memory (PORAM)
NASA Technical Reports Server (NTRS)
Alphonse, G. A.
1989-01-01
It is shown that the need to minimize component count, power and size, and to maximize packing density require a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in the order of research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and magneto-optic (MO) storage medium, aimed at moderate size, moderate power, and high packing density. The next design level uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to reduce size further and minimize the number of components.
Predictive design and interpretation of colliding pulse injected laser wakefield experiments
NASA Astrophysics Data System (ADS)
Cormier-Michel, Estelle; Ranjbar, Vahid H.; Cowan, Ben M.; Bruhwiler, David L.; Geddes, Cameron G. R.; Chen, Min; Ribera, Benjamin; Esarey, Eric; Schroeder, Carl B.; Leemans, Wim P.
2010-11-01
The use of colliding laser pulses to control the injection of plasma electrons into the plasma wake of a laser plasma accelerator is a promising approach to obtaining stable, tunable electron bunches with reduced emittance and energy spread. Colliding Pulse Injection (CPI) experiments are being performed by groups around the world. We will present recent particle-in-cell simulations, using the parallel VORPAL framework, of CPI for physical parameters relevant to ongoing experiments of the LOASIS program at LBNL. We evaluate the effect of laser and plasma tuning, on the trapped electron bunch and perform parameter scans in order to optimize the quality of the bunch. Impact of non-ideal effects such as imperfect laser modes and laser self focusing are also evaluated. Simulation data are validated against current experimental results, and are used to design future experiments.
Katakami, Naoto; Mita, Tomoya; Yoshii, Hidenori; Shiraiwa, Toshihiko; Yasuda, Tetsuyuki; Okada, Yosuke; Umayahara, Yutaka; Kaneto, Hideaki; Osonoi, Takeshi; Yamamoto, Tsunehiko; Kuribayashi, Nobuichi; Maeda, Kazuhisa; Yokoyama, Hiroki; Kosugi, Keisuke; Ohtoshi, Kentaro; Hayashi, Isao; Sumitani, Satoru; Tsugawa, Mamiko; Ohashi, Makoto; Taki, Hideki; Nakamura, Tadashi; Kawashima, Satoshi; Sato, Yasunori; Watada, Hirotaka; Shimomura, Iichiro
2017-10-01
Sodium-glucose co-transporter-2 (SGLT2) inhibitors are anti-diabetic agents that improve glycemic control with a low risk of hypoglycemia and ameliorate a variety of cardiovascular risk factors. The aim of the ongoing study described herein is to investigate the preventive effects of tofogliflozin, a potent and selective SGLT2 inhibitor, on the progression of atherosclerosis in subjects with type 2 diabetes (T2DM) using carotid intima-media thickness (IMT), an established marker of cardiovascular disease (CVD), as a marker. The Study of Using Tofogliflozin for Possible better Intervention against Atherosclerosis for type 2 diabetes patients (UTOPIA) trial is a prospective, randomized, open-label, blinded-endpoint, multicenter, and parallel-group comparative study. The aim was to recruit a total of 340 subjects with T2DM but no history of apparent CVD at 24 clinical sites and randomly allocate these to a tofogliflozin treatment group or a conventional treatment group using drugs other than SGLT2 inhibitors. As primary outcomes, changes in mean and maximum IMT of the common carotid artery during a 104-week treatment period will be measured by carotid echography. Secondary outcomes include changes in glycemic control, parameters related to β-cell function and diabetic nephropathy, the occurrence of CVD and adverse events, and biochemical measurements reflecting vascular function. This is the first study to address the effects of SGLT2 inhibitors on the progression of carotid IMT in subjects with T2DM without a history of CVD. The results will be available in the very near future, and these findings are expected to provide clinical data that will be helpful in the prevention of diabetic atherosclerosis and subsequent CVD. Kowa Co., Ltd. UMIN000017607.
Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana
2016-10-26
It has been shown that it is possible to modify macronutrient oxidation, physical fitness and resting energy expenditure (REE) by changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and conventional diet (C) with the same caloric restriction (-500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry and indirect calorimetry was performed at the start and after 12 weeks Repeated-measures ANOVA (Analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used for evaluation of the relationships between continuous variables and factors. Maximal oxygen consumption (VO 2max ) increased by 12% in vegetarian group (V) (F = 13.1, p < 0.001, partial η ² = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η ² = 0.209). Maximal performance (Watt max) increased by 21% in V (F = 8.3, p < 0.001, partial η ² = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η ² = 0.116). Our results indicate that V leads more effectively to improvement in physical fitness than C after aerobic exercise program.
Khosravi, Adnan; Esfahani-Monfared, Zahra; Seifi, Sharareh; Khodadad, Kian
2017-01-01
Maintenance strategy has been used to improve survival in non-small cell lung cancer (NSCLC). We investigated whether switch maintenance therapy with vinorelbine improved progression free survival (PFS) after first-line chemotherapy with gemcitabine plus carboplatin. In this single blind, parallel, phase 2, randomized trial, patients with NSCLC pathology, age >18 years, Eastern Cooperative Oncology Group (ECOG) performance status (PS) score of 0-2, and advanced stage (IIIB and IV) were treated with up to 6 cycles of gemcitabine 1250 mg/m 2 (day 1 and 8) plus carboplatin AUC 5 (day 1) every 3 weeks. Patients who did not show progression after first-line chemotherapy were randomly assigned to receive switch maintenance with vinorelbine (25 mg/m 2 , day 1, 15) or the best supportive care until disease progression. A total of 100 patients were registered, of whom 34 had a non-progressive response to first-line chemotherapy and randomly received maintenance vinorelbine (n=19) or best supportive care (n=15). The hazard ratio of PFS in the vinorelbine group relative to the best supportive care group was 1.097 (95% confidence interval = 0.479-2.510; P-value =0.827). There was no significant difference between the overall survival for the two groups (P=0.068). Switch maintenance strategies are beneficial, but defining the right candidates for treatment is a problem. Moreover, the trial designs do not always reflect the real-world considerations. Switch maintenance therapy with vinorelbine, though had tolerable toxicity, did not improve PFS in patients with NSCLC. Therefore, other agents should be considered in this setting.
Backward spoof surface wave in plasmonic metamaterial of ultrathin metallic structure.
Liu, Xiaoyong; Feng, Yijun; Zhu, Bo; Zhao, Junming; Jiang, Tian
2016-02-04
Backward wave with anti-parallel phase and group velocities is one of the basic properties associated with negative refraction and sub-diffraction image that have attracted considerable interest in the context of photonic metamaterials. It has been predicted theoretically that some plasmonic structures can also support backward wave propagation of surface plasmon polaritons (SPPs), however direct experimental demonstration has not been reported, to the best of our knowledge. In this paper, a specially designed plasmonic metamaterial of corrugated metallic strip has been proposed that can support backward spoof SPP wave propagation. The dispersion analysis, the full electromagnetic field simulation and the transmission measurement of the plasmonic metamaterial waveguide have clearly validated the backward wave propagation with dispersion relation possessing negative slope and opposite directions of group and phase velocities. As a further verification and application, a contra-directional coupler is designed and tested that can route the microwave signal to opposite terminals at different operating frequencies, indicating new application opportunities of plasmonic metamaterial in integrated functional devices and circuits for microwave and terahertz radiation.
Su, Xiaoshi; Norris, Andrew N
2016-06-01
Gradient index (GRIN), refractive, and asymmetric transmission devices for elastic waves are designed using a solid with aligned parallel gaps. The gaps are assumed to be thin so that they can be considered as parallel cracks separating elastic plate waveguides. The plates do not interact with one another directly, only at their ends where they connect to the exterior solid. To formulate the transmission and reflection coefficients for SV- and P-waves, an analytical model is established using thin plate theory that couples the waveguide modes with the waves in the exterior body. The GRIN lens is designed by varying the thickness of the plates to achieve different flexural wave speeds. The refractive effect of SV-waves is achieved by designing the slope of the edge of the plate array, and keeping the ratio between plate length and flexural wavelength fixed. The asymmetric transmission of P-waves is achieved by sending an incident P-wave at a critical angle, at which total conversion to SV-wave occurs. An array of parallel gaps perpendicular to the propagation direction of the reflected waves stop the SV-wave but let P-waves travel through. Examples of focusing, steering, and asymmetric transmission devices are discussed.
Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.
Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias
2011-01-01
The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
ERIC Educational Resources Information Center
Raman, Madhavi Gayathri; Vijaya
2016-01-01
This paper captures the design of a comprehensive curriculum incorporating the four skills based exclusively on the use of parallel audio-visual and written texts. We discuss the use of authentic materials to teach English to Indian undergraduates aged 18 to 20 years. Specifically, we talk about the use of parallel reading (screen-play) and…
Introduction to Computers: Parallel Alternative Strategies for Students. Course No. 0200000.
ERIC Educational Resources Information Center
Chauvenne, Sherry; And Others
Parallel Alternative Strategies for Students (PASS) is a content-centered package of alternative methods and materials designed to assist secondary teachers to meet the needs of mainstreamed learning-disabled and emotionally-handicapped students of various achievement levels in the basic education content courses. This supplementary text and…
Issues of planning trajectory of parallel robots taking into account zones of singularity
NASA Astrophysics Data System (ADS)
Rybak, L. A.; Khalapyan, S. Y.; Gaponenko, E. V.
2018-03-01
A method for determining the design characteristics of a parallel robot necessary to provide specified parameters of its working space that satisfy the controllability requirement is developed. The experimental verification of the proposed method was carried out using an approximate planar 3-RPR mechanism.
Link failure detection in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.
2010-11-09
Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
A mirror for lab-based quasi-monochromatic parallel x-rays
NASA Astrophysics Data System (ADS)
Nguyen, Thanhhai; Lu, Xun; Lee, Chang Jun; Jung, Jin-Ho; Jin, Gye-Hwan; Kim, Sung Youb; Jeon, Insu
2014-09-01
A multilayered parabolic mirror with six W/Al bilayers was designed and fabricated to generate monochromatic parallel x-rays using a lab-based x-ray source. Using this mirror, curved bright bands were obtained in x-ray images as reflected x-rays. The parallelism of the reflected x-rays was investigated using the shape of the bands. The intensity and monochromatic characteristics of the reflected x-rays were evaluated through measurements of the x-ray spectra in the band. High intensity, nearly monochromatic, and parallel x-rays, which can be used for high resolution x-ray microscopes and local radiation therapy systems, were obtained.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie
2014-01-01
It is very time consuming to solve fractional differential equations. The computational complexity of two-dimensional fractional differential equation (2D-TFDE) with iterative implicit finite difference method is O(M(x)M(y)N(2)). In this paper, we present a parallel algorithm for 2D-TFDE and give an in-depth discussion about this algorithm. A task distribution model and data layout with virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that the parallel computing technology will become a very basic method for the computational intensive fractional applications in the near future.
NASA Astrophysics Data System (ADS)
Shi, Wei; Hu, Xiaosong; Jin, Chao; Jiang, Jiuchun; Zhang, Yanru; Yip, Tony
2016-05-01
With the development and popularization of electric vehicles, it is urgent and necessary to develop effective management and diagnosis technology for battery systems. In this work, we design a parallel battery model, according to equivalent circuits of parallel voltage and branch current, to study effects of imbalanced currents on parallel large-format LiFePO4/graphite battery systems. Taking a 60 Ah LiFePO4/graphite battery system manufactured by ATL (Amperex Technology Limited, China) as an example, causes of imbalanced currents in the parallel connection are analyzed using our model, and the associated effect mechanisms on long-term stability of each single battery are examined. Theoretical and experimental results show that continuously increasing imbalanced currents during cycling are mainly responsible for the capacity fade of LiFePO4/graphite parallel batteries. It is thus a good way to avoid fast performance fade of parallel battery systems by suppressing variations of branch currents.
Halaska, M; Raus, K; Bĕles, P; Martan, A; Paithner, K G
1998-10-01
The aim of study presented here was to gather the data about the tolerability and efficacy of Vitex agnus castus (VACS) extract. The study was designed as double-blind, placebo controlled in two parallel groups (each 50 patients). Treatment phase lasted 3 consequent menstrual cycles (2 x 30 drops/day = 1.8 ml of VASC) or placebo. Mastalgia during at least 5 days of the cycle before the treatment was the strict inclusion condition. For assessment of the efficacy visual analogue scale was used. Altogether 97 patients were included into the statistical analysis (VACS: n = 48, placebo: n = 49). Intensity of breast pain diminished quicker with VACS group. The tolerability was satisfactory. We found VACS to be useful in the treatment of cyclical breast pain in women.
Strain energy release rate, interlaminar stresses, and 3-D transformation of stiffnesses
NASA Technical Reports Server (NTRS)
1988-01-01
In this analysis, a delamination between the belt and core sections is assumed to grow parallel to the belt direction in the tapered and uniform sections. These delaminations in each section are denoted by a and b respectively. The core section in the taper portion is modeled by two equivalent sublaminates. The stiffness properties are smeared to obtain effective cracked and uncracked stiffnesses which are designated A (u) and A (c). These stiffnesses change from one ply drop group to another with crack growth a by experiencing a sudden change at discrete locations. Therefore, A (u) and A (c) can be represented in three consecutive regions.
A Cloud Based Real-Time Collaborative Platform for eHealth.
Ionescu, Bogdan; Gadea, Cristian; Solomon, Bogdan; Ionescu, Dan; Stoicu-Tivadar, Vasile; Trifan, Mircea
2015-01-01
For more than a decade, the eHealth initiative has been a government concern of many countries. In an Electronic Health Record (EHR) System, there is a need for sharing the data with a group of specialists simultaneously. Collaborative platforms alone are just a part of a solution, while a collaborative platform with parallel editing capabilities and with synchronized data streaming are stringently needed. In this paper, the design and implementation of a collaborative platform used in healthcare is introduced by describing the high level architecture and its implementation. A series of eHealth services are identified and usage examples in a healthcare environment are given.
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Qin, Shao-gang; Song, Chun-yan; Jiang, Yun-hong
2014-09-01
With the development of science and technology, photoelectric equipment comprises visible system, infrared system, laser system and so on, integration, information and complication are higher than past. Parallelism and jumpiness of optical axis are important performance of photoelectric equipment,directly affect aim, ranging, orientation and so on. Jumpiness of optical axis directly affect hit precision of accurate point damage weapon, but we lack the facility which is used for testing this performance. In this paper, test system which is used fo testing parallelism and jumpiness of optical axis is devised, accurate aim isn't necessary and data processing are digital in the course of testing parallelism, it can finish directly testing parallelism of multi-axes, aim axis and laser emission axis, parallelism of laser emission axis and laser receiving axis and first acuualizes jumpiness of optical axis of optical sighting device, it's a universal test system.
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/0 requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results
NASA Technical Reports Server (NTRS)
Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)
1994-01-01
In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n(sub 1/2) values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP have each a unique signature. 3) The remaining NPB can grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution ofmore » particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.« less
A 48Cycles/MB H.264/AVC Deblocking Filter Architecture for Ultra High Definition Applications
NASA Astrophysics Data System (ADS)
Zhou, Dajiang; Zhou, Jinjia; Zhu, Jiayi; Goto, Satoshi
In this paper, a highly parallel deblocking filter architecture for H.264/AVC is proposed to process one macroblock in 48 clock cycles and give real-time support to QFHD@60fps sequences at less than 100MHz. 4 edge filters organized in 2 groups for simultaneously processing vertical and horizontal edges are applied in this architecture to enhance its throughput. While parallelism increases, pipeline hazards arise owing to the latency of edge filters and data dependency of deblocking algorithm. To solve this problem, a zig-zag processing schedule is proposed to eliminate the pipeline bubbles. Data path of the architecture is then derived according to the processing schedule and optimized through data flow merging, so as to minimize the cost of logic and internal buffer. Meanwhile, the architecture's data input rate is designed to be identical to its throughput, while the transmission order of input data can also match the zig-zag processing schedule. Therefore no intercommunication buffer is required between the deblocking filter and its previous component for speed matching or data reordering. As a result, only one 24×64 two-port SRAM as internal buffer is required in this design. When synthesized with SMIC 130nm process, the architecture costs a gate count of 30.2k, which is competitive considering its high performance.
Wiegersma, Marian; Panman, Chantal M C R; Kollen, Boudewijn J; Vermeulen, Karin M; Schram, Aaltje J; Messelink, Embert J; Berger, Marjolein Y; Lisman-Van Leeuwen, Yvonne; Dekker, Janny H
2014-02-01
Pelvic floor muscle training (PFMT) and pessaries are commonly used in the conservative treatment of pelvic organ prolapse (POP). Because there is a lack of evidence regarding the optimal choice between these two interventions, we designed the "Pelvic Organ prolapse in primary care: effects of Pelvic floor muscle training and Pessary treatment Study" (POPPS). POPPS consists of two parallel open label randomized controlled trials performed in primary care, in women aged ≥55 years, recruited through a postal questionnaire. In POPPS trial 1, women with mild POP receive either PFMT or watchful waiting. In POPPS trial 2, women with advanced POP receive either PFMT or pessary treatment. Patient recruitment started in 2009 and was finished in December 2012. Primary outcome of both POPPS trials is improvement in POP-related symptoms. Secondary outcomes are quality of life, sexual function, POP-Q stage, pelvic floor muscle function, post-void residual volume, patients' perception of improvement, and costs. All outcomes are measured 3, 12, and 24 months after the start of treatment. Cost-effectiveness will be calculated based on societal costs, using the PFDI-20 and the EQ-5D as outcomes. In this paper the POPPS design, the encountered challenges and our solutions, and participant baseline characteristics are presented. For both trials the target numbers of patients in each treatment group are achieved, giving this study sufficient power to lead to promising results. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Modelling and simulation of parallel triangular triple quantum dots (TTQD) by using SIMON 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fathany, Maulana Yusuf, E-mail: myfathany@gmail.com; Fuada, Syifaul, E-mail: fsyifaul@gmail.com; Lawu, Braham Lawas, E-mail: bram-labs@rocketmail.com
2016-04-19
This research presents analysis of modeling on Parallel Triple Quantum Dots (TQD) by using SIMON (SIMulation Of Nano-structures). Single Electron Transistor (SET) is used as the basic concept of modeling. We design the structure of Parallel TQD by metal material with triangular geometry model, it is called by Triangular Triple Quantum Dots (TTQD). We simulate it with several scenarios using different parameters; such as different value of capacitance, various gate voltage, and different thermal condition.
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
Supersonic civil airplane study and design: Performance and sonic boom
NASA Technical Reports Server (NTRS)
Cheung, Samson
1995-01-01
Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report are used to support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general usages of sonic boom propagation study and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numeric optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design on a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for Parallel Virtual Machine has been developed for aerodynamic design and study.
Pfammatter, Angela; Spring, Bonnie; Saligram, Nalini; Davé, Raj; Gowda, Arun; Blais, Linelle; Arora, Monika; Ranjani, Harish; Ganda, Om; Hedeker, Donald; Reddy, Sethu; Ramalingam, Sandhya
2016-08-05
In low/middle income countries like India, diabetes is prevalent and health care access limited. Most adults have a mobile phone, creating potential for mHealth interventions to improve public health. To examine the feasibility and initial evidence of effectiveness of mDiabetes, a text messaging program to improve diabetes risk behaviors, a global nonprofit organization (Arogya World) implemented mDiabetes among one million Indian adults. A prospective, parallel cohort design was applied to examine whether mDiabetes improved fruit, vegetable, and fat intakes and exercise. Intervention participants were randomly selected from the one million Nokia subscribers who elected to opt in to mDiabetes. Control group participants were randomly selected from non-Nokia mobile phone subscribers. mDiabetes participants received 56 text messages in their choice of 12 languages over 6 months; control participants received no contact. Messages were designed to motivate improvement in diabetes risk behaviors and increase awareness about the causes and complications of diabetes. Participant health behaviors (exercise and fruit, vegetable, and fat intake) were assessed between 2012 and 2013 via telephone surveys by blinded assessors at baseline and 6 months later. Data were cleaned and analyzed in 2014 and 2015. 982 participants in the intervention group and 943 in the control group consented to take the phone survey at baselne. At the end of the 6-month period, 611 (62.22%) in the intervention and 632 (67.02%) in the control group completed the follow-up telephone survey. Participants receiving texts demonstrated greater improvement in a health behavior composite score over 6 months, compared with those who received no messages F(1, 1238) = 30.181, P<.001, 95% CI, 0.251-0.531. Fewer intervention participants demonstrated health behavior decline compared with controls. Improved fruit, vegetable, and fat consumption (P<.01) but not exercise were observed in those receiving messages, as compared with controls. A text messaging intervention was feasible and showed initial evidence of effectiveness in improving diabetes-related health behaviors, demonstrating the potential to facilitate population-level behavior change in a low/middle income country. Australian New Zealand Clinical Trials Registry (ACTRN): 12615000423516; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=367946&isReview=true (Archived by WebCite at http://www.webcitation.org/6j5ptaJgF).
Continuation Electroconvulsive Therapy vs Pharmacotherapy for Relapse Prevention in Major Depression
Kellner, Charles H.; Knapp, Rebecca G.; Petrides, Georgios; Rummans, Teresa A.; Husain, Mustafa M.; Rasmussen, Keith; Mueller, Martina; Bernstein, Hilary J.; O’Connor, Kevin; Smith, Glenn; Biggs, Melanie; Bailine, Samuel H.; Malur, Chitra; Yim, Eunsil; McClintock, Shawn; Sampson, Shirlene; Fink, Max
2013-01-01
Background Although electroconvulsive therapy (ECT) has been shown to be extremely effective for the acute treatment of major depression, it has never been systematically assessed as a strategy for relapse prevention. Objective To evaluate the comparative efficacy of continuation ECT (C-ECT) and the combination of lithium carbonate plus nortriptyline hydrochloride (C-Pharm) in the prevention of depressive relapse. Design Multisite, randomized, parallel design, 6-month trial performed from 1997 to 2004. Setting Five academic medical centers and their outpatient psychiatry clinics. Patients Two hundred one patients with Structured Clinical Interview for DSM-IV–diagnosed unipolar depression who had remitted with a course of bilateral ECT. Interventions Random assignment to 2 treatment groups receiving either C-ECT (10 treatments) or C-Pharm for 6 months. Main Outcome Measure Relapse of depression, compared between the C-ECT and C-Pharm groups. Results In the C-ECT group, 37.1% experienced disease relapse, 46.1% continued to have disease remission at the study end, and 16.8% dropped out of the study. In the C-Pharm group, 31.6% experienced disease relapse, 46.3% continued to have disease remission, and 22.1% dropped out of the study. Both Kaplan-Meier and Cox proportional hazards regression analyses indicated no statistically significant differences in overall survival curves and time to relapse for the groups. Mean±SD time to relapse for the C-ECT group was 9.1±7.0 weeks compared with 6.7±4.6 weeks for the C-Pharm group (P=.13). Both groups had relapse proportions significantly lower than a historical placebo control from a similarly designed study. Conclusions Both C-ECT and C-Pharm were shown to be superior to a historical placebo control, but both had limited efficacy, with more than half of patients either experiencing disease relapse or dropping out of the study. Even more effective strategies for relapse prevention in mood disorders are urgently needed. PMID:17146008
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
The Electronic Spectra of Phthalocyanine Radical Anions and Cations.
1985-03-01
415. (10) McHugh , A.J., Gouterman, M., Weiss, J., Theoret. Chim. Acta, 1972, 24, 346. (11) Henriksson, A. and Sundbom, M., Theoret. Chim. Acta, 1972, 27...oxidisable. The spectra for the nickel (II) Pc species exactly parallel the main group data since no change in oxidation state of the nickel ion is expect- ed18...and no charge transfer is expected in the region under study. 6 In summary, the main group complexes, nickel and cobalt species all have parallel
A Parallel Saturation Algorithm on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of ring events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the ring of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores
NASA Astrophysics Data System (ADS)
Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei
We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.
Mahmood, Zohaib; McDaniel, Patrick; Guérin, Bastien; Keil, Boris; Vester, Markus; Adalsteinsson, Elfar; Wald, Lawrence L; Daniel, Luca
2016-07-01
In a coupled parallel transmit (pTx) array, the power delivered to a channel is partially distributed to other channels because of coupling. This power is dissipated in circulators resulting in a significant reduction in power efficiency. In this study, a technique for designing robust decoupling matrices interfaced between the RF amplifiers and the coils is proposed. The decoupling matrices ensure that most forward power is delivered to the load without loss of encoding capabilities of the pTx array. The decoupling condition requires that the impedance matrix seen by the power amplifiers is a diagonal matrix whose entries match the characteristic impedance of the power amplifiers. In this work, the impedance matrix of the coupled coils is diagonalized by a successive multiplication by its eigenvectors. A general design procedure and software are developed to generate automatically the hardware that implements diagonalization using passive components. The general design method is demonstrated by decoupling two example parallel transmit arrays. Our decoupling matrices achieve better than -20 db decoupling in both cases. A robust framework for designing decoupling matrices for pTx arrays is presented and validated. The proposed decoupling strategy theoretically scales to any arbitrary number of channels. Magn Reson Med 76:329-339, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Conchouso, David; McKerricher, Garret; Arevalo, Arpys; Castro, David; Shamim, Atif; Foulds, Ian G
2016-08-16
Scaled-up production of microfluidic droplets, through the parallelization of hundreds of droplet generators, has received a lot of attention to bring novel multiphase microfluidics research to industrial applications. However, apart from droplet generation, other significant challenges relevant to this goal have never been discussed. Examples include monitoring systems, high-throughput processing of droplets and quality control procedures among others. In this paper, we present and compare capacitive and radio frequency (RF) resonator sensors as two candidates that can measure the dielectric properties of emulsions in microfluidic channels. By placing several of these sensors in a parallelization device, the stability of the droplet generation at different locations can be compared, and potential malfunctions can be detected. This strategy enables for the first time the monitoring of scaled-up microfluidic droplet production. Both sensors were prototyped and characterized using emulsions with droplets of 100-150 μm in diameter, which were generated in parallelization devices at water-in-oil volume fractions (φ) between 11.1% and 33.3%.Using these sensors, we were able to measure accurately increments as small as 2.4% in the water volume fraction of the emulsions. Although both methods rely on the dielectric properties of the emulsions, the main advantage of the RF resonator sensors is the fact that they can be designed to resonate at multiple frequencies of the broadband transmission line. Consequently with careful design, two or more sensors can be parallelized and read out by a single signal. Finally, a comparison between these sensors based on their sensitivity, readout cost and simplicity, and design flexibility is also discussed.
PREFACE: Second Meeting of the APS Topical Group on Hadronic Physics
NASA Astrophysics Data System (ADS)
Ernst, David; de Jager, Kees; Roberts, Craig; Sheldon, Paul; Swanson, Eric
2007-06-01
The Second Meeting of the APS Topical Group on Hadronic Physics was held on 22-24 October 2006 at the Opryland Resort in Nashville, Tennessee. Keeping with tradition, the meeting was held in conjunction with the Fall meeting of the APS Division of Nuclear Physics. Approximately 90 physicists participated in the meeting, presenting 25 talks in seven plenary sessions and 48 talks in 11 parallel sessions. These sessions covered a wide range of topics related to strongly interacting matter. Among these were charm spectroscopy, gluonic exotics, nucleon resonance physics, RHIC physics, electroweak and spin physics, lattice QCD initiatives, and new facilities. Brad Tippens and Brad Keister provided perspective from the funding agencies. The organisers are extremely grateful to the following institutions for financial and logistical support: the American Physical Society, Jefferson Lab, Brookhaven National Laboratory, and Vanderbilt University. We thank the following persons for assisting in organising the parallel sessions: Ted Barnes, Jian-Ping Chen, Ed Kinney, Krishna Kumar, Harry Lee, Mike Leitch, Kam Seth, and Dennis Weygand. We also thank Gerald Ragghianti for designing the conference poster, Will Johns for managing the audio-visual equipment and for placing the talks on the web, Sandy Childress for administrative expertise, and Vanderbilt graduate students Eduardo Luiggi and Jesus Escamillad for their assistance. David Ernst, Kees de Jager, Craig Roberts (Chair), Paul Sheldon and Eric Swanson Editors
Haziza, Christelle; de La Bourdonnaye, Guillaume; Skiada, Dimitra; Ancerewicz, Jacek; Baker, Gizelle; Picavet, Patrick; Lüdicke, Frank
2016-11-30
The Tobacco Heating System (THS) 2.2, a candidate Modified Risk Tobacco Product (MRTP), is designed to heat tobacco without burning it. Tobacco is heated in order to reduce the formation of harmful and potentially harmful constituents (HPHC), and reduce the consequent exposure, compared with combustible cigarettes (CC). In this 5-day exposure, controlled, parallel-group, open-label clinical study, 160 smoking, healthy subjects were randomized to three groups and asked to: (1) switch from CCs to THS 2.2 (THS group; 80 participants); (2) continue to use their own non-menthol CC brand (CC group; 41 participants); or (3) to refrain from smoking (smoking abstinence (SA) group; 39 participants). Biomarkers of exposure, except those associated with nicotine exposure, were significantly reduced in the THS group compared with the CC group, and approached the levels observed in the SA group. Increased product consumption and total puff volume were reported in the THS group. However, exposure to nicotine was similar to CC at the end of the confinement period. Reduction in urge-to-smoke was comparable between the THS and CC groups and THS 2.2 product was well tolerated. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Batch, Bryan C; Tyson, Crystal; Bagwell, Jacqueline; Corsino, Leonor; Intille, Stephen; Lin, Pao-Hwa; Lazenka, Tony; Bennett, Gary; Bosworth, Hayden B; Voils, Corrine; Grambow, Steven; Sutton, Aziza; Bordogna, Rachel; Pangborn, Matthew; Schwager, Jenifer; Pilewski, Kate; Caccia, Carla; Burroughs, Jasmine; Svetkey, Laura P
2014-03-01
The obesity epidemic has spread to young adults, leading to significant public health implications later in adulthood. Intervention in early adulthood may be an effective public health strategy for reducing the long-term health impact of the epidemic. Few weight loss trials have been conducted in young adults. It is unclear what weight loss strategies are beneficial in this population. To describe the design and rationale of the NHLBI-sponsored Cell Phone Intervention for You (CITY) study, which is a single center, randomized three-arm trial that compares the impact on weight loss of 1) a behavioral intervention that is delivered almost entirely via cell phone technology (Cell Phone group); and 2) a behavioral intervention delivered mainly through monthly personal coaching calls enhanced by self-monitoring via cell phone (Personal Coaching group), each compared to 3) a usual care, advice-only control condition. A total of 365 community-dwelling overweight/obese adults aged 18-35 years were randomized to receive one of these three interventions for 24 months in parallel group design. Study personnel assessing outcomes were blinded to group assignment. The primary outcome is weight change at 24 [corrected] months. We hypothesize that each active intervention will cause more weight loss than the usual care condition. Study completion is anticipated in 2014. If effective, implementation of the CITY interventions could mitigate the alarming rates of obesity in young adults through promotion of weight loss. ClinicalTrial.gov: NCT01092364. Published by Elsevier Inc.
Batch, Bryan C.; Tyson, Crystal; Bagwell, Jacqueline; Corsino, Leonor; Intille, Stephen; Lin, Pao-Hwa; Lazenka, Tony; Bennett, Gary; Bosworth, Hayden B.; Voils, Corrine; Grambow, Steven; Sutton, Aziza; Bordogna, Rachel; Pangborn, Matthew; Schwager, Jenifer; Pilewski, Kate; Caccia, Carla; Burroughs, Jasmine; Svetkey, Laura P.
2014-01-01
Background The obesity epidemic has spread to young adults, leading to significant public health implications later in adulthood. Intervention in early adulthood may be an effective public health strategy for reducing the long-term health impact of the epidemic. Few weight loss trials have been conducted in young adults. It is unclear what weight loss strategies are beneficial in this population. Purpose To describe the design and rationale of the NHLBI-sponsored Cell Phone Intervention for You (CITY) study, which is a single center, randomized three-arm trial that compares the impact on weight loss of 1) a behavioral intervention that is delivered almost entirely via cell phone technology (Cell Phone group); and 2) a behavioral intervention delivered mainly through monthly personal coaching calls enhanced by self-monitoring via cell phone (Personal Coaching group), each compared to; 3) a usual care, advice-only control condition. Methods A total of 365 community-dwelling overweight/obese adults aged 18–35 years were randomized to receive one of these three interventions for 24 months in parallel group design. Study personnel assessing outcomes were blinded to group assignment. The primary outcome is weight change at 12 months. We hypothesize that each active intervention will cause more weight loss than the usual care condition. Study completion is anticipated in 2014. Conclusions If effective, implementation of the CITY interventions could mitigate the alarming rates of obesity in young adults through promotion of weight loss. PMID:24462568
Varley, Rosemary; Cowell, Patricia E; Dyson, Lucy; Inglis, Lesley; Roper, Abigail; Whiteside, Sandra P
2016-03-01
There is currently little evidence on effective interventions for poststroke apraxia of speech. We report outcomes of a trial of self-administered computer therapy for apraxia of speech. Effects of speech intervention on naming and repetition of treated and untreated words were compared with those of a visuospatial sham program. The study used a parallel-group, 2-period, crossover design, with participants receiving 2 interventions. Fifty participants with chronic and stable apraxia of speech were randomly allocated to 1 of 2 order conditions: speech-first condition versus sham-first condition. Period 1 design was equivalent to a randomized controlled trial. We report results for this period and profile the effect of the period 2 crossover. Period 1 results revealed significant improvement in naming and repetition only in the speech-first group. The sham-first group displayed improvement in speech production after speech intervention in period 2. Significant improvement of treated words was found in both naming and repetition, with little generalization to structurally similar and dissimilar untreated words. Speech gains were largely maintained after withdrawal of intervention. There was a significant relationship between treatment dose and response. However, average self-administered dose was modest for both groups. Future software design would benefit from incorporation of social and gaming components to boost motivation. Single-word production can be improved in chronic apraxia of speech with behavioral intervention. Self-administered computerized therapy is a promising method for delivering high-intensity speech/language rehabilitation. URL: http://orcid.org/0000-0002-1278-0601. Unique identifier: ISRCTN88245643. © 2016 American Heart Association, Inc.
Ren, Chong; McGrath, Colman; Yang, Yanqi
2015-09-01
To assess the effectiveness of diode low-level laser therapy (LLLT) for orthodontic pain control, a systematic and extensive electronic search for randomised controlled trials (RCTs) investigating the effects of diode LLLT on orthodontic pain prior to November 2014 was performed using the Cochrane Library (Issue 9, 2014), PubMed (1997), EMBASE (1947) and Web of Science (1956). The Cochrane tool for risk of bias evaluation was used to assess the bias risk in the chosen data. A meta-analysis was conducted using RevMan 5.3. Of the 186 results, 14 RCTs, with a total of 659 participants from 11 countries, were included. Except for three studies assessed as having a 'moderate risk of bias', the RCTs were rated as having a 'high risk of bias'. The methodological weaknesses were mainly due to 'blinding' and 'allocation concealment'. The meta-analysis showed that diode LLLT significantly reduced orthodontic pain by 39 % in comparison with placebo groups (P = 0.02). Diode LLLT was shown to significantly reduce the maximum pain intensity among parallel-design studies (P = 0.003 versus placebo groups; P = 0.000 versus control groups). However, no significant effects were shown for split-mouth-design studies (P = 0.38 versus placebo groups). It was concluded that the use of diode LLLT for orthodontic pain appears promising. However, due to methodological weaknesses, there was insufficient evidence to support or refute LLLT's effectiveness. RCTs with better designs and appropriate sample power are required to provide stronger evidence for diode LLLT's clinical applications.
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.
Li, Bin; Man, Ying; Bai, Li-Ping; Ji, Hai-Ying; Shi, Xue-Geng; Cui, Dong-Liang
2013-01-01
In order to find new herbicidally active compounds, a fifteen-member library, focusing on the variation of 3- position substituents of 2,4,5-imidazolidine-trione or 2-thioxo-4,5-imidazolidinedione, was designed and prepared in parallel by the reaction of various ureas or thioureas with oxalyl chloride using solution-phase technology. An interesting and, to the best of our knowledge, unprecedented finding is that a by-product of 1-phenyl-3-propylcarbodiimide was formed during the addition of oxalyl chloride into the solution of 1-phenyl-3-propylthiourea in the presence of triethylamine in dichloromethane. It has been shown that the herbicidal activity of 2,4,5-imidazolidinetriones is about the same as that of their analogous 2-thioxo-4,5-imidazolidinediones. Compound with propyl or isopropyl group at the 3- position of 2,4,5-imidazolidinetrione ring demonstrated good herbicidal activity. The most active compound, 1-(2-fluoro- 4-chloro-5-propargyloxy)-phenyl-3-propyl-2-thioxo-4,5-imidazolidinedione, gave 95% control of the growth of velvetleaf at 200 g/ha in the post-emergence test.
Massively parallel de novo protein design for targeted therapeutics.
Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J; Hicks, Derrick R; Vergara, Renan; Murapa, Patience; Bernard, Steffen M; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T; Koday, Merika T; Jenkins, Cody M; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M; Fernández-Velasco, D Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A; Fuller, Deborah H; Baker, David
2017-10-05
De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.
Massively parallel de novo protein design for targeted therapeutics
NASA Astrophysics Data System (ADS)
Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David
2017-10-01
De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.
Massively parallel de novo protein design for targeted therapeutics
Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David
2018-01-01
De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37–43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing. PMID:28953867
Relationship of Individual and Group Change: Ontogeny and Phylogeny in Biology.
ERIC Educational Resources Information Center
Gould, Steven Jay
1984-01-01
Considers the issue of parallels between ontogeny and phylogeny from an historical perspective. Discusses such parallels in relationship to two ontogenetic principles concerning recapitulation and sequence of stages. Differentiates between Piaget's use of the idea of recapitulation and Haeckel's biogenetic law. (Author/RH)
Molecular pathways to parallel evolution: I. Gene nexuses and their morphological correlates.
Zuckerkandl, E
1994-12-01
Aspects of the regulatory interactions among genes are probably as old as most genes are themselves. Correspondingly, similar predispositions to changes in such interactions must have existed for long evolutionary periods. Features of the structure and the evolution of the system of gene regulation furnish the background necessary for a molecular understanding of parallel evolution. Patently "unrelated" organs, such as the fat body of a fly and the liver of a mammal, can exhibit fractional homology, a fraction expected to become subject to quantitation. This also seems to hold for different organs in the same organism, such as wings and legs of a fly. In informational macromolecules, on the other hand, homology is indeed all or none. In the quite different case of organs, analogy is expected usually to represent attenuated homology. Many instances of putative convergence are likely to turn out to be predominantly parallel evolution, presumably including the case of the vertebrate and cephalopod eyes. Homology in morphological features reflects a similarity in networks of active genes. Similar nexuses of active genes can be established in cells of different embryological origins. Thus, parallel development can be considered a counterpart to parallel evolution. Specific macromolecular interactions leading to the regulation of the c-fos gene are given as an example of a "controller node" defined as a regulatory unit. Quantitative changes in gene control are distinguished from relational changes, and frequent parallelism in quantitative changes is noted in Drosophila enzymes. Evolutionary reversions in quantitative gene expression are also expected. The evolution of relational patterns is attributed to several distinct mechanisms, notably the shuffling of protein domains. The growth of such patterns may in part be brought about by a particular process of compensation for "controller gene diseases," a process that would spontaneously tend to lead to increased regulatory and organismal complexity. Despite the inferred increase in gene interaction complexity, whose course over evolutionary time is unknown, the number of homology groups for the functional and structural protein units designated as domains has probably remained rather constant, even as, in some of its branches, evolution moved toward "higher" organisms. In connection with this process, the question is raised of parallel evolution within the purview of activating and repressing master switches and in regard to the number of levels into which the hierarchies of genic master switches will eventually be resolved.
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
Coupling between structure and liquids in a parallel stage space shuttle design
NASA Technical Reports Server (NTRS)
Kana, D. D.; Ko, W. L.; Francis, P. H.; Nagy, A.
1972-01-01
A study was conducted to determine the influence of liquid propellants on the dynamic loads for space shuttle vehicles. A parallel-stage configuration model was designed and tested to determine the influence of liquid propellants on coupled natural modes. A forty degree-of-freedom analytical model was also developed for predicting these modes. Currently available analytical models were used to represent the liquid contributions, even though coupled longitudinal and lateral motions are present in such a complex structure. Agreement between the results was found in the lower few modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.
Almost every computer architect dreams of achieving high system performance with low implementation costs. A multigauge machine can reconfigure its data-path width, provide parallelism, achieve better resource utilization, and sometimes can trade computational precision for increased speed. A simple experimental method is used here to capture the main characteristics of multigauging. The measurements indicate evidence of near-optimal speedups. Adapting these ideas in designing parallel processors incurs low costs and provides flexibility. Several operational aspects of designing a multigauge machine are discussed as well. Thus, this research reports the technical, economical, and operational feasibility studies of multigauging.
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.
1996-01-01
The goal of this task was to create a design and prototype implementation of a database environment that is particular suited for handling the image, vision and scientific data associated with the NASA's EOC Amazon project. The focus was on a data model and query facilities that are designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.
System software for the finite element machine
NASA Technical Reports Server (NTRS)
Crockett, T. W.; Knott, J. D.
1985-01-01
The Finite Element Machine is an experimental parallel computer developed at Langley Research Center to investigate the application of concurrent processing to structural engineering analysis. This report describes system-level software which has been developed to facilitate use of the machine by applications researchers. The overall software design is outlined, and several important parallel processing issues are discussed in detail, including processor management, communication, synchronization, and input/output. Based on experience using the system, the hardware architecture and software design are critiqued, and areas for further work are suggested.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1998-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate a radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms, the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should be now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
A Data Parallel Multizone Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1995-01-01
We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.
Is the thumb a fifth finger? A study of digit interaction during force production tasks
Olafsdottir, Halla; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
We studied indices of digit interaction in single- and multi-digit maximal voluntary contraction (MVC) tests when the thumb acted either in parallel or in opposition to the fingers. The peak force produced by the thumb was much higher when the thumb acted in opposition to the fingers and its share of the total force in the five-digit MVC test increased dramatically. The fingers showed relatively similar peak forces and unchanged sharing patterns in the four-finger MVC task when the thumb acted in parallel and in opposition to the fingers. Enslaving during one-digit tasks showed relatively mild differences between the two conditions, while the differences became large when enslaving was quantified for multi-digit tasks. Force deficit was pronounced when the thumb acted in parallel to the fingers; it showed a monotonic increase with the number of explicitly involved digits up to four digits and then a drop when all five digits were involved. Force deficit all but disappeared when the thumb acted in opposition to the fingers. However, for both thumb positions, indices of digit interaction were similar for groups of digits that did or did not include the thumb. These results suggest that, given a certain hand configuration, the central nervous system treats the thumb as a fifth finger. They provide strong support for the hypothesis that indices of digit interaction reflect neural factors, not the peripheral design of the hand. An earlier formal model was able to account for the data when the thumb acted in parallel to the fingers. However, it failed for the data with the thumb acting in opposition to the fingers. PMID:15322785
[Traffic-related PM2.5 regulates IL-2 releasing in Jurkat T cells by calcium signaling pathway].
Tong, Guoqiang; Zhang, Zhihong; Han, Jianbiao; Qiu, Yong; Xu, Jianjun
2013-09-01
To explore the effects of traffic-related PM2.5 on interleukin-2 (IL-2) in Jurkat T cells and the regulatory action of calcium signaling pathway. The cells were exposed to 100 microg/ml of PM2.5 for 3, 6 and 24 h. Normal saline group, blank filter group, calcium chelating agent EGTA group and the calcineurin antagonist cyclosporine A (CSA) group were as parallel control. The level of IL-2 was detected by ELISA kits, the mRNA expression of CaN, NFAT were determined by QRT-PCR. The nuclear distribution of NFAT was observed by immunofluorescence microscopy. The level of IL-2 in Jurkat T cells exposed to 100 microg/ml PM2.5 was significantly lower than parallel groups, but higher than PM2.5 + CSA group and PM2.5 + EGTA group (P < 0.05). With the increase of time, the releasing level of IL-2 appeared reducing trend in 100 microg/ml of PM2.5 group. The mRNA expression level of NFAT and CaN were higher than parallel groups, PM2.5 + CSA group and PM2.5 + EGTA group (P < 0.05). PM2.5 can induce NFAT protein with dephosphorylation and be activated, and NFAT protein can shift into nuclear. The level of IL-2 was negatively associated with the expression level of NFAT and CaN gene (P < 0.05). Traffic-related PM2.5 may inhibit the releasing of IL-2, Ca(2+)-CaN-NFAT signal pathway may involve in the regulation of IL-2.
The characteristics and limitations of the MPS/MMS battery charging system
NASA Technical Reports Server (NTRS)
Ford, F. E.; Palandati, C. F.; Davis, J. F.; Tasevoli, C. M.
1980-01-01
A series of tests was conducted on two 12 ampere hour nickel cadmium batteries under a simulated cycle regime using the multiple voltage versus temperature levels designed into the modular power system (MPS). These tests included: battery recharge as a function of voltage control level; temperature imbalance between two parallel batteries; a shorted or partially shorted cell in one of the two parallel batteries; impedance imbalance of one of the parallel battery circuits; and disabling and enabling one of the batteries from the bus at various charge and discharge states. The results demonstrate that the eight commandable voltage versus temperature levels designed into the MPS provide a very flexible system that not only can accommodate a wide range of normal power system operation, but also provides a high degree of flexibility in responding to abnormal operating conditions.
Update on Development of Mesh Generation Algorithms in MeshKit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay
2015-09-30
MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKitmore » are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.« less