Sample records for parallel alternative strategies

  1. Introduction to Computers: Parallel Alternative Strategies for Students. Course No. 0200000.

    ERIC Educational Resources Information Center

    Chauvenne, Sherry; And Others

    Parallel Alternative Strategies for Students (PASS) is a content-centered package of alternative methods and materials designed to assist secondary teachers to meet the needs of mainstreamed learning-disabled and emotionally-handicapped students of various achievement levels in the basic education content courses. This supplementary text and…

  2. Life Management Skills. Teacher's Guide [and Student Workbook]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Goldstein, Jeren; Walford, Sylvia

    This teacher's guide and student workbook are part of a series of supplementary curriculum packages presenting alternative methods and activities designed to meet the needs of Florida secondary students with mild disabilities or other special learning needs. The Life Management Skills PASS (Parallel Alternative Strategies for Students) teacher's…

  3. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.

  4. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by employing bandwidth shells at areas of overutilization

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-04-27

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a final destination. The default routing strategy is altered responsive to detection of overutilization of a particular path of one or more links, and at least some traffic is re-routed by distributing the traffic among multiple paths (which may include the default path). An alternative path may require a greater number of link traversals to reach the destination node.

  5. Economic Analysis of Alternative Strategies for Detection of ALK Rearrangements in Non Small Cell Lung Cancer.

    PubMed

    Doshi, Shivang; Ray, David; Stein, Karen; Zhang, Jie; Koduru, Prasad; Fogt, Franz; Wellman, Axel; Wat, Ricky; Mathews, Charles

    2016-01-06

    Identification of alterations in ALK gene and development of ALK-directed therapies have increased the need for accurate and efficient detection methodologies. To date, research has focused on the concordance between the two most commonly used technologies, fluorescent in situ hybridization (FISH) and immunohistochemistry (IHC). However, inter-test concordance reflects only one, albeit important, aspect of the diagnostic process; laboratories, hospitals, and payors must understand the cost and workflow of ALK rearrangement detection strategies. Through literature review combined with interviews of pathologists and laboratory directors in the U.S. and Europe, a cost-impact model was developed that compared four alternative testing strategies-IHC only, FISH only, IHC pre-screen followed by FISH confirmation, and parallel testing by both IHC and FISH. Interviews were focused on costs of reagents, consumables, equipment, and personnel. The resulting model showed that testing by IHC alone cost less ($90.07 in the U.S., $68.69 in Europe) than either independent or parallel testing by both FISH and IHC ($441.85 in the U.S. and $279.46 in Europe). The strategies differed in cost of execution, turnaround time, reimbursement, and number of positive results detected, suggesting that laboratories must weigh the costs and the clinical benefit of available ALK testing strategies.

  6. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability of and need for processing SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonance scanners in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function by means of a serial Gauss-Seidel-type scheme. Our algorithm also optimizes the original cost function, but unlike the original work it is a parallel Jacobi-class scheme with alternating minimizations. This strategy is known as the chessboard (red-black) type: red pixels are mutually independent and can be updated in parallel within one iteration, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel architectures, namely multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
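
    The chessboard (red-black) update pattern described above is a standard way to expose parallelism in Gauss-Seidel-style relaxations: pixels of one colour never neighbour each other, so a whole colour class can be updated at once. The following is a minimal NumPy sketch of the idea applied to a generic 5-point smoothing problem, not the accumulation-of-residual-maps cost function or the GPU code from the paper.

```python
import numpy as np

def redblack_jacobi_step(phi, rhs, h=1.0):
    """One red-black relaxation sweep on a 2-D grid.

    All 'red' interior pixels (i + j even) are independent of each other
    and are updated together; 'black' pixels (i + j odd) follow in a
    second, equally independent half-sweep.
    """
    # Checkerboard masks over the interior of the grid.
    i, j = np.indices(phi.shape)
    interior = np.zeros(phi.shape, dtype=bool)
    interior[1:-1, 1:-1] = True
    red = interior & ((i + j) % 2 == 0)
    black = interior & ((i + j) % 2 == 1)

    for mask in (red, black):
        # 5-point stencil: average of the four neighbours minus the source term.
        neigh = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                 np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi[mask] = 0.25 * (neigh[mask] - h * h * rhs[mask])
    return phi

if __name__ == "__main__":
    phi = np.zeros((64, 64))
    rhs = np.zeros((64, 64))
    rhs[32, 32] = 1.0          # point source
    for _ in range(200):
        redblack_jacobi_step(phi, rhs)
    print("value at the source after 200 sweeps:", phi[32, 32])
```

    Each half-sweep is fully data-parallel, which is what makes the multicore CPU, Xeon Phi, and GPU ports discussed in the abstract comparatively direct.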

  7. Economic Analysis of Alternative Strategies for Detection of ALK Rearrangements in Non Small Cell Lung Cancer

    PubMed Central

    Doshi, Shivang; Ray, David; Stein, Karen; Zhang, Jie; Koduru, Prasad; Fogt, Franz; Wellman, Axel; Wat, Ricky; Mathews, Charles

    2016-01-01

    Identification of alterations in ALK gene and development of ALK-directed therapies have increased the need for accurate and efficient detection methodologies. To date, research has focused on the concordance between the two most commonly used technologies, fluorescent in situ hybridization (FISH) and immunohistochemistry (IHC). However, inter-test concordance reflects only one, albeit important, aspect of the diagnostic process; laboratories, hospitals, and payors must understand the cost and workflow of ALK rearrangement detection strategies. Through literature review combined with interviews of pathologists and laboratory directors in the U.S. and Europe, a cost-impact model was developed that compared four alternative testing strategies—IHC only, FISH only, IHC pre-screen followed by FISH confirmation, and parallel testing by both IHC and FISH. Interviews were focused on costs of reagents, consumables, equipment, and personnel. The resulting model showed that testing by IHC alone cost less ($90.07 in the U.S., $68.69 in Europe) than either independent or parallel testing by both FISH and IHC ($441.85 in the U.S. and $279.46 in Europe). The strategies differed in cost of execution, turnaround time, reimbursement, and number of positive results detected, suggesting that laboratories must weigh the costs and the clinical benefit of available ALK testing strategies. PMID:26838801

  8. Economics. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Chambliss, Robert, Ed.; Fresen, Sue, Ed.

    This teacher's guide and student guide unit contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The curriculum correlates to Florida's Sunshine State Standards and is divided into the following six units of study: (1) introduction to economics,…

  9. World History--Part 2: Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Schaap, Eileen, Ed.; Fresen, Sue, Ed.

    This teacher's guide and student guide unit contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The materials differ from standard textbooks and workbooks in several ways: simplified text; smaller units of study; reduced vocabulary level;…

  10. Consumer Mathematics. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Walford, Sylvia B.; Thomas, Portia R.

    This teacher's guide and student guide are designed to accompany a consumer mathematics textbook that contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The materials are designed to help these students succeed in regular education content…

  11. Physical Science. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Danner, Greg, Ed.; Fresen, Sue, Ed.

    This teacher's guide and student guide unit contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The materials are designed to help these students succeed in regular education content courses and include simplified text and smaller units of…

  12. Earth/Space Science Course No. 2001310. [Student Guide and] Teacher's Guide.

    ERIC Educational Resources Information Center

    Atkinson, Missy

    These documents contain instructional materials for the Earth/Space Science curriculum designed by the Florida Department of Education. The student guide is adapted for students with disabilities or diverse learning needs. The content of Parallel Alternative Strategies for Students (PASS) differs from standard textbooks with its simplified text,…

  13. World History--Part 1. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Schaap, Eileen, Ed.; Fresen, Sue, Ed.

    This teacher's guide and student guide unit contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The unit focuses on world history and correlates to Florida's Sunshine State Standards. It is divided into the following 21 units of study that…

  14. Soil nitrogen transformations under alternative management strategies in Appalachian forests

    Treesearch

    T. Adam Coates; Ralph E.J. Boerner; Thomas A. Waldrop; Daniel A. Yaussy

    2008-01-01

    Once subject to frequent fire and strongly N limited, the forests of the Appalachian Mountain region of eastern North America have experienced almost a century of fire suppression, and changes in tree species composition, understory density and composition, and accumulations of detritus have paralleled the changes in fire frequency. In an effort to restore these...

  15. Alternative Therapeutic Strategies With the Urban Negro.

    ERIC Educational Resources Information Center

    Sarles, Harvey B.

    Social pressures in the United States are explained in the context of group identification and group behavior. The urban scene is made up of a number of groups, or subcultures, which have parallel structures along socio-economic, and nationality-color-ethnic lines. These groups act as if they had a structured plan. It is shown how this plan is…

  16. Effective contaminant detection networks in uncertain groundwater flow fields.

    PubMed

    Hudak, P F

    2001-01-01

    A mass transport simulation model tested seven contaminant detection-monitoring networks under a 40-degree range of groundwater flow directions. Each monitoring network contained five wells located 40 m from a rectangular landfill. The 40-m distance (lag) was measured in different directions, depending upon the strategy used to design a particular monitoring network. Lagging the wells parallel to the central flow path was more effective than alternative design strategies. Other strategies allowed higher percentages of leaks to migrate between monitoring wells. Results of this study suggest that centrally lagged groundwater monitoring networks perform most effectively in uncertain groundwater-flow fields.

  17. Evaluating the performance of the particle finite element method in parallel architectures

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high-performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the goodness of the method on large simulations, it is imperative to use parallel environments. Parallel strategies for the finite element method have been widely studied, and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed with the selected parallel strategy in mind. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems on a single computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, domain decomposition automatically isolates the memory, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses running on multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.

  18. Work stealing for GPU-accelerated parallel programs in a global address space framework: WORK STEALING ON GPU-ACCELERATED SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
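
    As a rough illustration of the scheduling idea only (not the CPU-GPU, global-address-space system described above), the sketch below shows the core of work stealing: each worker pops tasks from one end of its own deque and, when idle, steals from the opposite end of a victim's deque. The Worker class and run_task payload are invented for the example, and Python threads are used purely for readability; real implementations rely on lock-free deques and truly parallel execution.

```python
import threading
import collections
import random
import time

class Worker(threading.Thread):
    """Each worker owns a deque: it pops tasks from the 'new' end and,
    when its deque is empty, steals from the 'old' end of a victim."""

    def __init__(self, wid, all_workers):
        super().__init__()
        self.wid = wid
        self.all_workers = all_workers
        self.deque = collections.deque()
        self.lock = threading.Lock()
        self.done = 0

    def run_task(self, task):
        time.sleep(task * 1e-4)        # stand-in for real work
        self.done += 1

    def steal(self):
        victims = [w for w in self.all_workers if w is not self]
        random.shuffle(victims)
        for v in victims:
            with v.lock:
                if v.deque:
                    return v.deque.popleft()   # steal from the "old" end
        return None

    def run(self):
        while True:
            with self.lock:
                task = self.deque.pop() if self.deque else None  # own "new" end
            if task is None:
                task = self.steal()
            if task is None:
                return                  # no work left anywhere: terminate
            self.run_task(task)

if __name__ == "__main__":
    workers = []
    for wid in range(2):
        workers.append(Worker(wid, workers))
    workers[0].deque.extend(range(200))     # deliberately unbalanced start
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("tasks completed per worker:", [w.done for w in workers])
```

    Even in this toy version, the second worker ends up executing a substantial share of the tasks it never owned, which is the load-balancing effect the abstract studies at CPU-GPU scale.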

  19. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.

  20. Impact of using two dialyzers in parallel on phosphate clearance in hemodialysis patients: a randomized trial.

    PubMed

    Thompson, Stephanie; Manns, Braden; Lloyd, Anita; Hemmelgarn, Brenda; MacRae, Jennifer; Klarenbach, Scott; Unsworth, Larry; Courtney, Mark; Tonelli, Marcello

    2017-05-01

    Dietary restriction and phosphate binders are the main interventions used to manage hyperphosphatemia in people on hemodialysis, but have limited efficacy. Modifying conventional dialysis regimens to enhance phosphate clearance as an alternative approach remains relatively unstudied. This was a 10-week, 2-arm, randomized crossover study. Participants were prevalent dialysis patients ( n = 32) with consecutive serum phosphate levels >1.6 mmol/L and on stable doses of a phosphate binder. Following a 2-week run-in period, participants were randomized to initiate dialysis using two high flux dialyzers in parallel (blood flow ≥350 mL/min, dialysate flow 800 mL/min) or standard dialysis using one high flux dialyzer (blood flow ≥350 mL/min, dialysate flow of 800 mL/min). Each regimen was 3 weeks in duration. After a 2-week washout period, participants received the alternate regimen. The primary outcome was the mean difference in phosphate clearance by dialyzer strategy. Secondary outcomes were phosphate removal and pre-dialysis serum phosphate. Phosphate clearance for the double dialyzer strategy did not differ significantly from the single dialyzer strategy [mean difference 7.5 mL/min (95% confidence interval, 95% CI, -6.1, 21.0), P = 0.28]. There was no difference in total phosphate removal and pre-dialysis phosphate between the double and single dialyzer strategies [total phosphate removal mean difference -0.2 mmol (95% CI -4.1, 3.7), P = 0.93; pre-dialysis mean difference 0.01 mmol/L (95% CI -0.18, 0.21), P = 0.88]. There was no difference in the proportion of participants who experienced at least one episode of intradialytic hypotension (32 versus 47%, P = 0.13). A limitation of the study was frequent protocol deviations in the dialysis prescription. In this study, the use of two dialyzers in parallel did not increase phosphate clearance, phosphate removal or pre-dialysis serum phosphorus when compared with a standard dialysis treatment strategy. Future studies should continue to evaluate novel methods of phosphate removal using conventional hemodialysis. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  1. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization

    NASA Astrophysics Data System (ADS)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea

    2017-11-01

    Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.

  2. Parallel evolution of passive and active defence in land snails.

    PubMed

    Morii, Yuta; Prozorova, Larisa; Chiba, Satoshi

    2016-11-11

    Predator-prey interactions are major processes promoting phenotypic evolution. However, it remains unclear how predation causes morphological and behavioural diversity in prey species and how it might lead to speciation. Here, we show that substantial divergence in the phenotypic traits of prey species has occurred among closely related land snails as a result of adaptation to predator attacks. This caused the divergence of defensive strategies into two alternatives: passive defence and active defence. Phenotypic traits of the subarctic Karaftohelix land snail have undergone radiation in northeast Asia, and distinctive morphotypes generally coexist in the same regions. In these land snails, we documented two alternative defence behaviours against predation by malacophagous beetles. Furthermore, the behaviours are potentially associated with differences in shell morphology. In addition, molecular phylogenetic analyses indicated that these alternative strategies against predation arose independently on the islands and on the continent suggesting that anti-predator adaptation is a major cause of phenotypic diversity in these snails. Finally, we suggest the potential speciation of Karaftohelix snails as a result of the divergence of defensive strategies into passive and active behaviours and the possibility of species radiation due to anti-predatory adaptations.

  3. Topology-dependent density optima for efficient simultaneous network exploration

    NASA Astrophysics Data System (ADS)

    Wilson, Daniel B.; Baker, Ruth E.; Woodhouse, Francis G.

    2018-06-01

    A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimize this process: How many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimized at a searcher density that is well predicted by the spectral gap. Furthermore, we find that nonequilibrium processes, realized through the addition of bias, can support significantly increased density optima. Our results suggest alternative hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.
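
    The searcher-averaged parallel cover time can be estimated directly by simulation. The sketch below uses independent, unbiased walkers on a small cycle graph, which ignores the exclusion (blocking) interactions analysed in the paper but illustrates the quantity being traded off: the product of the number of searchers and the time to visit every node.

```python
import random

def parallel_cover_time(adj, k, trials=200):
    """Average number of steps until k independent random walkers,
    started at random nodes, have jointly visited every node."""
    nodes = list(adj)
    total = 0
    for _ in range(trials):
        walkers = [random.choice(nodes) for _ in range(k)]
        visited = set(walkers)
        steps = 0
        while len(visited) < len(nodes):
            walkers = [random.choice(adj[w]) for w in walkers]
            visited.update(walkers)
            steps += 1
        total += steps
    return total / trials

if __name__ == "__main__":
    n = 50
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    for k in (1, 2, 5, 10, 25):
        t = parallel_cover_time(ring, k)
        # Searcher-averaged cost: cover time multiplied by the number of searchers.
        print(f"k={k:3d}  cover time ~{t:7.1f}  k*T ~{k * t:8.1f}")
```

    Adding searchers shortens the cover time but inflates the total searcher-steps k*T, and the density at which that trade-off is optimal is what the abstract relates to the spectral gap of the network.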

  4. Parallelizing alternating direction implicit solver on GPUs

    USDA-ARS?s Scientific Manuscript database

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
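
    For context, parallel cyclic reduction decouples a tridiagonal system in about log2(n) combination steps, and within each step every equation is updated independently; that per-step independence is what maps onto GPU threads. Below is a plain NumPy sketch of the basic scheme (serial, with none of the hardware-resource optimizations the abstract refers to), assuming a diagonally dominant system so no pivoting is needed.

```python
import numpy as np

def pcr_solve(a, b, c, d):
    """Solve a tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i]
    by parallel cyclic reduction.  Every equation is updated independently
    within each step, which is what makes the method GPU-friendly."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    s = 1
    while s < n:
        idx = np.arange(n)
        lo, hi = idx - s, idx + s
        lo_ok, hi_ok = lo >= 0, hi < n
        lo_c, hi_c = np.clip(lo, 0, n - 1), np.clip(hi, 0, n - 1)

        # Combine equation i with equations i-s and i+s to eliminate
        # the couplings at distance s (coefficients outside the grid are zero).
        alpha = np.where(lo_ok, -a / b[lo_c], 0.0)
        beta = np.where(hi_ok, -c / b[hi_c], 0.0)

        a_new = alpha * a[lo_c]
        c_new = beta * c[hi_c]
        b_new = b + alpha * c[lo_c] + beta * a[hi_c]
        d_new = d + alpha * d[lo_c] + beta * d[hi_c]
        a, b, c, d = a_new, b_new, c_new, d_new
        s *= 2
    return d / b            # every equation is now fully decoupled

if __name__ == "__main__":
    n = 16
    rng = np.random.default_rng(0)
    a = rng.random(n); a[0] = 0.0
    c = rng.random(n); c[-1] = 0.0
    b = 2.0 + a + c                      # diagonally dominant
    d = rng.random(n)
    x = pcr_solve(a, b, c, d)
    # Cross-check against a dense solve of the same system.
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print("PCR matches dense solve:", np.allclose(A @ x, d))
```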

  5. Contributions of dorsal striatal subregions to spatial alternation behavior.

    PubMed

    Moussa, Roula; Poucet, Bruno; Amalric, Marianne; Sargolini, Francesca

    2011-07-01

    Considerable evidence has shown a clear dissociation between the dorsomedial (DMS) and the dorsolateral (DLS) striatum in instrumental conditioning. In particular, DMS activity is necessary to form action-outcome associations, whereas the DLS is required for developing habitual behavior. However, few studies have investigated whether a similar dissociation exists in more complex goal-directed learning processes. The present study examined the role of the two structures in such complex learning by analyzing the effects of excitotoxic DMS and DLS lesions during the acquisition and extinction of spatial alternation behavior, in a continuous alternation T-maze task. We demonstrate that DMS and DLS lesions have opposite effects, the former impairing and the latter improving animal performance during learning and extinction. DMS lesions may impair the acquisition of spatial alternation behavior by disrupting the signal necessary to link a goal with a specific spatial sequence. In contrast, DLS lesions may accelerate goal-driven strategies by minimizing the influence of external stimuli on the response, thus increasing the impact of action-reward contingencies. Taken together, these results suggest that DMS- and DLS-mediated learning strategies develop in parallel and compete for the control of the behavioral response early in learning.

  6. Different Relative Orientation of Static and Alternative Magnetic Fields and Cress Roots Direction of Growth Changes Their Gravitropic Reaction

    NASA Astrophysics Data System (ADS)

    Sheykina, Nadiia; Bogatina, Nina

    The following variants of root orientation relative to the static and alternative components of the magnetic field were studied. In the first variant, the static magnetic field was directed parallel to the gravitation vector, the alternative magnetic field was directed perpendicular to the static one, and the roots were directed perpendicular to both field components and to the gravitation vector; in this variant, negative gravitropism of the cress roots was observed. In the second variant, the static magnetic field was directed parallel to the gravitation vector, the alternative magnetic field was directed perpendicular to the static one, and the roots were directed parallel to the alternative magnetic field. In the third variant, the alternative magnetic field was directed parallel to the gravitation vector, the static magnetic field was directed perpendicular to the gravitation vector, and the roots were directed perpendicular to both field components and to the gravitation vector. In the fourth variant, the alternative magnetic field was directed parallel to the gravitation vector, the static magnetic field was directed perpendicular to the gravitation vector, and the roots were directed parallel to the static magnetic field. In all cases studied, the frequency of the alternative magnetic field was equal to the cyclotron frequency of Ca ions. In the second, third and fourth variants gravitropism was positive, but the gravitropic reaction speeds differed: in the second and fourth variants the speed coincided, within error limits, with the gravitropic reaction speed under normal Earth conditions, while in the third variant the gravitropic reaction was slowed substantially.

  7. Simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii using excitation-emission matrix fluorescence coupled with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju

    2018-02-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in the complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. The physical or chemical separation step was avoided due to the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), the unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. Consequently, for the validation samples, the analytical results obtained by the six second-order calibration methods were almost accurate. But for the Acorus tatarinowii samples, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.

  8. Parallel Implementation of Triangular Cellular Automata for Computing Two-Dimensional Elastodynamic Response on Arbitrary Domains

    NASA Astrophysics Data System (ADS)

    Leamy, Michael J.; Springer, Adam C.

    In this research we report parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straight-forward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to cores present on a dual quad-core shared-memory system (eight total processors). We note that this message passing parallelization strategy is directly applicable to computer clustered computing, which will be the focus of follow-on research. Results on the shared memory platform indicate nearly-ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.

  9. 76 FR 38178 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ..., New York, New York 10045-0001: 1. Thomas H. Lee (Alternative) Fund VI, L.P., Thomas H. Lee (Alternative) Parallel Fund VI, L.P., Thomas H. Lee (Alternative) Parallel (DT) Fund VI, L.P., THL FBC Equity Investors, L.P., THL Advisors (Alternative) VI, L.P., Thomas H. Lee (Alternative) VI, Ltd., THL Managers VI...

  10. Multi-partitioning for ADI-schemes on message passing architectures

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1994-01-01

    A kind of discrete-operator splitting called Alternating Direction Implicit (ADI) has been found to be useful in simulating fluid flow problems. In particular, it is being used to study the effects of hot exhaust jets from high performance aircraft on landing surfaces. Decomposition techniques that minimize load imbalance and message-passing frequency are described. Three strategies that are investigated for implementing the NAS Scalar Penta-diagonal Parallel Benchmark (SP) are transposition, pipelined Gaussian elimination, and multipartitioning. The multipartitioning strategy, which was used on Ethernet, was found to be the most efficient, although it was considered only a moderate success because of Ethernet's limited communication properties. The efficiency derived largely from the coarse granularity of the strategy, which reduced latencies and allowed overlap of communication and computation.

  11. Requirements for implementing real-time control functional modules on a hierarchical parallel pipelined system

    NASA Technical Reports Server (NTRS)

    Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald

    1989-01-01

    Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is the necessity of a microcomputer system in order to provide sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low level control synchronizing primitives to higher level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on the current robot control systems is explored.

  12. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than compiler-managed loop-level parallelism implemented with OpenMP.
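
    The domain-decomposition strategy described above amounts to cutting the image into strips or tiles, filtering each independently, and stitching the results back together. The sketch below uses a process pool and a simple gradient-magnitude stage as a stand-in for the full Canny pipeline; it is not the Tile64 implementation, and a one-row halo is the only data-management detail handled here.

```python
import numpy as np
from multiprocessing import Pool

def grad_magnitude(block):
    """Gradient-magnitude stage (a stand-in for one step of Canny)."""
    gy, gx = np.gradient(block.astype(float))
    return np.hypot(gx, gy)

def process_block(args):
    block, crop_top, crop_bottom = args
    out = grad_magnitude(block)
    # Drop the halo rows so only this strip's own rows are returned.
    return out[crop_top: out.shape[0] - crop_bottom]

def parallel_grad(image, nstrips=4):
    bounds = np.linspace(0, image.shape[0], nstrips + 1).astype(int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # One-row halo so the finite differences at strip edges see the
        # same neighbours as in the serial computation.
        pad_lo, pad_hi = max(lo - 1, 0), min(hi + 1, image.shape[0])
        jobs.append((image[pad_lo:pad_hi].copy(), lo - pad_lo, pad_hi - hi))
    with Pool(nstrips) as pool:
        return np.vstack(pool.map(process_block, jobs))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((512, 512))
    assert np.allclose(parallel_grad(img), grad_magnitude(img))
    print("strip decomposition reproduces the serial result")
```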

  13. Parallels, How Many? Geometry Module for Use in a Mathematics Laboratory Setting.

    ERIC Educational Resources Information Center

    Brotherton, Sheila; And Others

    This is one of a series of geometry modules developed for use by secondary students in a laboratory setting. This module was conceived as an alternative approach to the usual practice of giving Euclid's parallel postulate and then mentioning that alternate postulates would lead to an alternate geometry or geometries. Instead, the student is led…

  14. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with a wide variety of combinatorial optimization problems. Even when it is used, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon node-variable selection strategies. In the case of a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
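
    A serial sketch of the underlying branch and bound for graph coloring may help make the node-selection issue concrete: the frontier of open nodes lives in a priority queue, and the ordering of that queue (here, fewest colors used, then deepest first) is exactly the kind of selection rule that, in a parallel BB, also determines which nodes are worth shipping to other processors. The code below is serial and illustrative only, not the MPI implementation evaluated in the paper.

```python
import heapq

def bb_graph_coloring(adj):
    """Best-first branch and bound for minimum graph coloring.
    Vertices 0..n-1 are colored in index order; a heap node records
    (colors used, -depth, partial assignment)."""
    n = len(adj)
    best_count, best_colors = n + 1, list(range(n))
    heap = [(0, 0, ())]
    while heap:
        used, negdepth, assign = heapq.heappop(heap)
        if used >= best_count:
            continue                              # bound: cannot improve
        depth = -negdepth
        if depth == n:
            best_count, best_colors = used, list(assign)
            continue
        # Colors already in use that no colored neighbour of vertex `depth` has.
        forbidden = {assign[u] for u in adj[depth] if u < depth}
        feasible = sorted(set(range(used)) - forbidden)
        if used + 1 < best_count:
            feasible.append(used)                 # branch on one brand-new color
        for colour in feasible:
            heapq.heappush(heap, (max(used, colour + 1), -(depth + 1),
                                  assign + (colour,)))
    return best_count, best_colors

if __name__ == "__main__":
    # 5-cycle: chromatic number 3.
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(bb_graph_coloring(adj))
```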

  15. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to provide modular and easy access for exploring these data. To take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail together with some performance benchmarks. PYMSES currently has two visualization techniques that work directly on the AMR: a splatting technique and a custom ray-tracing technique, each with its own advantages and drawbacks. We have also compared two parallel programming approaches, the Python multiprocessing library versus MPI runs. The load-balancing strategy has to be defined carefully in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  16. Escaping Antiangiogenic Therapy: Strategies Employed by Cancer Cells

    PubMed Central

    Pinto, Mauricio P.; Sotomayor, Paula; Carrasco-Avino, Gonzalo; Corvalan, Alejandro H.; Owen, Gareth I.

    2016-01-01

    Tumor angiogenesis is widely recognized as one of the “hallmarks of cancer”. Consequently, during the last decades the development and testing of commercial angiogenic inhibitors has been a central focus for both basic and clinical cancer research. While antiangiogenic drugs are now incorporated into standard clinical practice, as with all cancer therapies, tumors can eventually become resistant by employing a variety of strategies to receive nutrients and oxygen in the event of therapeutic assault. Herein, we concentrate and review in detail three of the principal mechanisms of antiangiogenic therapy escape: (1) upregulation of compensatory/alternative pathways for angiogenesis; (2) vasculogenic mimicry; and (3) vessel co-option. We suggest that an understanding of how a cancer cell adapts to antiangiogenic therapy may also parallel the mechanisms employed in the burgeoning tumor and in the isolated metastatic cells responsible for residual disease. Finally, we speculate on strategies to adapt antiangiogenic therapy for future clinical uses. PMID:27608016

  17. Escaping Antiangiogenic Therapy: Strategies Employed by Cancer Cells.

    PubMed

    Pinto, Mauricio P; Sotomayor, Paula; Carrasco-Avino, Gonzalo; Corvalan, Alejandro H; Owen, Gareth I

    2016-09-06

    Tumor angiogenesis is widely recognized as one of the "hallmarks of cancer". Consequently, during the last decades the development and testing of commercial angiogenic inhibitors has been a central focus for both basic and clinical cancer research. While antiangiogenic drugs are now incorporated into standard clinical practice, as with all cancer therapies, tumors can eventually become resistant by employing a variety of strategies to receive nutrients and oxygen in the event of therapeutic assault. Herein, we concentrate and review in detail three of the principal mechanisms of antiangiogenic therapy escape: (1) upregulation of compensatory/alternative pathways for angiogenesis; (2) vasculogenic mimicry; and (3) vessel co-option. We suggest that an understanding of how a cancer cell adapts to antiangiogenic therapy may also parallel the mechanisms employed in the burgeoning tumor and in the isolated metastatic cells responsible for residual disease. Finally, we speculate on strategies to adapt antiangiogenic therapy for future clinical uses.

  18. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.

  19. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  20. Informatics for RNA Sequencing: A Web Resource for Analysis on the Cloud

    PubMed Central

    Griffith, Malachi; Walker, Jason R.; Spies, Nicholas C.; Ainscough, Benjamin J.; Griffith, Obi L.

    2015-01-01

    Massively parallel RNA sequencing (RNA-seq) has rapidly become the assay of choice for interrogating RNA transcript abundance and diversity. This article provides a detailed introduction to fundamental RNA-seq molecular biology and informatics concepts. We make available open-access RNA-seq tutorials that cover cloud computing, tool installation, relevant file formats, reference genomes, transcriptome annotations, quality-control strategies, expression, differential expression, and alternative splicing analysis methods. These tutorials and additional training resources are accompanied by complete analysis pipelines and test datasets made available without encumbrance at www.rnaseq.wiki. PMID:26248053

  1. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  2. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large objects (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots, as well as alternative strategies for handling them.
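
    The round-robin, hash, and range partitioning alternatives mentioned above can be compared with a few lines of code; the tile keys and the four-node cluster below are invented purely for illustration.

```python
import hashlib

NODES = 4  # number of storage nodes in the hypothetical cluster

def round_robin(keys):
    return {k: i % NODES for i, k in enumerate(keys)}

def hash_partition(keys):
    return {k: int(hashlib.md5(k.encode()).hexdigest(), 16) % NODES
            for k in keys}

def range_partition(keys):
    # Contiguous key ranges preserve spatial locality, but a popular
    # range of tiles (a "hotspot") lands on a single node.
    keys = sorted(keys)
    per_node = -(-len(keys) // NODES)     # ceiling division
    return {k: i // per_node for i, k in enumerate(keys)}

if __name__ == "__main__":
    tiles = [f"tile_{row:02d}_{col:02d}" for row in range(8) for col in range(8)]
    for scheme in (round_robin, hash_partition, range_partition):
        placement = scheme(tiles)
        counts = [list(placement.values()).count(n) for n in range(NODES)]
        print(f"{scheme.__name__:15s} tiles per node: {counts}")
```

    Range partitioning keeps spatially adjacent tiles together, which favours range queries, while hash and round-robin spread a hotspot across nodes at the cost of that locality.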

  3. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to the traditional computation resources based on CPU environment, which already cannot meet the requirement of the whole computation demands or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative to solve the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957

  4. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to the traditional computation resources based on CPU environment, which already cannot meet the requirement of the whole computation demands or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative to solve the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations.
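
    The decoupling described above, a per-cell ODE (reaction) update plus a PDE (diffusion) update, is a standard operator-splitting pattern: the reaction step is independent per cell and the diffusion step is a local stencil, so both map naturally onto GPU kernels. The sketch below uses a toy FitzHugh-Nagumo-style cell model on a small 2-D grid in plain NumPy; it is not the sheep atrial model or the CUDA code from the study.

```python
import numpy as np

def ode_step(v, w, dt):
    """Per-cell reaction update (FitzHugh-Nagumo-like toy model).
    Every cell is independent, so in a GPU version this is one thread per cell."""
    dv = v - v ** 3 / 3.0 - w
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

def diffusion_step(v, dt, D=0.1):
    """Explicit update of the diffusion term of the monodomain model
    (5-point stencil, no-flux boundaries via edge padding)."""
    p = np.pad(v, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * v)
    return v + dt * D * lap

if __name__ == "__main__":
    n, dt = 128, 0.05
    v = -1.2 * np.ones((n, n))
    w = np.zeros((n, n))
    v[:10, :10] = 1.0                      # stimulate one corner
    for step in range(2000):
        v, w = ode_step(v, w, dt)          # reaction (ODE) half-step
        v = diffusion_step(v, dt)          # diffusion (PDE) half-step
    print("mean membrane variable:", float(v.mean()))
```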

  5. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  6. Parallel Note-Taking: A Strategy for Effective Use of Webnotes

    ERIC Educational Resources Information Center

    Pardini, Eleanor A.; Domizi, Denise P.; Forbes, Daniel A.; Pettis, Gretchen V.

    2005-01-01

    Many instructors supply online lecture notes but little attention has been given to how students can make the best use of this resource. Based on observations of student difficulties with these notes, a strategy called parallel note-taking was developed for using online notes. The strategy is a hybrid of research-proven strategies for effective…

  7. Implementation of a fully-balanced periodic tridiagonal solver on a parallel distributed memory architecture

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Erlebacher, G.

    1994-01-01

    While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian-elimination-type algorithm can be used in a parallel computation, and more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy has the algorithm move the data across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm will be shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
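
    Because each right-hand side is independent, the simplest parallel layout is the transpose strategy sketched below: each worker receives whole RHS columns and runs an ordinary Thomas (Gaussian-elimination) solve on each. The sketch uses a non-periodic system for brevity and ignores the inter-processor data movement whose cost distinguishes the two strategies in the report.

```python
import numpy as np
from multiprocessing import Pool

def thomas(args):
    """Serial Thomas algorithm for one tridiagonal system a, b, c with a
    single right-hand side d (a[0] and c[-1] are unused)."""
    a, b, c, d = (v.copy() for v in args)
    n = len(d)
    for i in range(1, n):                 # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

if __name__ == "__main__":
    n, nrhs = 256, 64
    rng = np.random.default_rng(2)
    a, c = rng.random(n), rng.random(n)
    b = 2.0 + a + c                       # diagonally dominant
    D = rng.random((n, nrhs))
    # "Transpose" layout: each worker gets whole RHS columns.
    with Pool(4) as pool:
        cols = pool.map(thomas, [(a, b, c, D[:, j]) for j in range(nrhs)])
    X = np.column_stack(cols)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print("all right-hand sides solved correctly:", np.allclose(A @ X, D))
```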

  8. Methodologies and Tools for Tuning Parallel Programs: 80% Art, 20% Science, and 10% Luck

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Bailey, David (Technical Monitor)

    1996-01-01

    The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessors. However, without effective means to monitor (and analyze) program execution, tuning the performance of parallel programs becomes exponentially difficult as program complexity and machine size increase. In the past few years, the ubiquitous introduction of performance tuning tools from various supercomputer vendors (Intel's ParAide, TMC's PRISM, CRI's Apprentice, and Convex's CXtrace) seems to indicate the maturity of performance instrumentation/monitor/tuning technologies and vendors'/customers' recognition of their importance. However, a few important questions remain: What kind of performance bottlenecks can these tools detect (or correct)? How time consuming is the performance tuning process? What are some important technical issues that remain to be tackled in this area? This workshop reviews the fundamental concepts involved in analyzing and improving the performance of parallel and heterogeneous message-passing programs. Several alternative strategies will be contrasted, and for each we will describe how currently available tuning tools (e.g. AIMS, ParAide, PRISM, Apprentice, CXtrace, ATExpert, Pablo, IPS-2) can be used to facilitate the process. We will characterize the effectiveness of the tools and methodologies based on actual user experiences at NASA Ames Research Center. Finally, we will discuss their limitations and outline recent approaches taken by vendors and the research community to address them.

  9. Signal-domain optimization metrics for MPRAGE RF pulse design in parallel transmission at 7 tesla.

    PubMed

    Gras, V; Vignaud, A; Mauconduit, F; Luong, M; Amadon, A; Le Bihan, D; Boulant, N

    2016-11-01

    Standard radiofrequency pulse design strategies focus on minimizing the deviation of the flip angle from a target value, which is sufficient but not necessary for signal homogeneity. An alternative approach, based directly on the signal, is proposed here for the MPRAGE sequence and is developed in the parallel transmission framework with the use of the kT-points parametrization. The flip-angle-homogenizing and the proposed methods were investigated numerically under explicit power and specific absorption rate constraints and tested experimentally in vivo on a 7 T parallel transmission system enabling real-time local specific absorption rate monitoring. Radiofrequency pulse performance was assessed by a careful analysis of the signal and contrast between white and gray matter. Despite a slight reduction of the flip angle uniformity, an improved signal and contrast homogeneity with a significant reduction of the specific absorption rate was achieved with the proposed metric in comparison with standard pulse designs. The proposed joint optimization of the inversion and excitation pulses enables significant reduction of the specific absorption rate in the MPRAGE sequence while preserving image quality. The work reported thus unveils a possible direction to increase the potential of ultra-high field MRI and parallel transmission. Magn Reson Med 76:1431-1442, 2016. © 2015 International Society for Magnetic Resonance in Medicine.

  10. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if the line intersects it). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), these alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
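
    A minimal data-parallel sketch (an editorial illustration, not the paper's code) of the "distance to a line" alternative: every pixel evaluates the same predicate independently, mimicking the pixel-per-processor SIMD mapping, here vectorized with NumPy.

    import numpy as np

    def draw_line(width, height, x0, y0, x1, y1, half_width=0.5):
        ys, xs = np.mgrid[0:height, 0:width]      # conceptually one processor per pixel
        dx, dy = x1 - x0, y1 - y0
        length = np.hypot(dx, dy)
        # perpendicular distance from each pixel centre to the infinite line
        dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length
        # projection parameter t in [0, 1] restricts the test to the segment
        t = ((xs - x0) * dx + (ys - y0) * dy) / (length * length)
        return (dist <= half_width) & (t >= 0.0) & (t <= 1.0)

    raster = draw_line(64, 32, 3, 5, 60, 20)      # boolean image of the segment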

  11. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
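
    For illustration only (a simplification of the approach described, not the authors' CUDA code), the per-cell, data-parallel update at the heart of an iterative flow-accumulation calculation can be sketched as follows, assuming a single-flow-direction (D8) grid; an MFD algorithm would instead split each cell's outflow among several downslope neighbours with fractional weights, but the per-cell update pattern is the same.

    import numpy as np

    def flow_accumulation(receiver_row, receiver_col):
        """receiver_row/receiver_col give, for every cell, the in-grid indices of
        the cell it drains into; outlet cells point to themselves."""
        nrow, ncol = receiver_row.shape
        rows, cols = np.indices((nrow, ncol))
        drains_out = (receiver_row != rows) | (receiver_col != cols)
        acc = np.ones((nrow, ncol))
        while True:
            inflow = np.zeros_like(acc)
            # every draining cell adds its current accumulation to its receiver;
            # on a GPU each thread would perform this update for its own cell
            np.add.at(inflow,
                      (receiver_row[drains_out], receiver_col[drains_out]),
                      acc[drains_out])
            new_acc = 1.0 + inflow
            if np.array_equal(new_acc, acc):      # converged once the longest flow path is covered
                return acc
            acc = new_acc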

  12. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUNSPARCs' network with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes in many cases is the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
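
    For reference, the speedup metric mentioned above follows directly from the measured elapsed times; a small helper (an editorial illustration) assuming a measured serial time and a parallel time on p workstations:

    def speedup_and_efficiency(t_serial, t_parallel, p):
        s = t_serial / t_parallel          # speedup
        return s, s / p                    # efficiency = speedup per processor

    print(speedup_and_efficiency(120.0, 40.0, 4))   # -> (3.0, 0.75)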

  13. A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains

    NASA Astrophysics Data System (ADS)

    Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.

    2018-02-01

    A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
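
    A minimal sketch (not the authors' implementation) of the second-level master-slave scheme with dynamic scheduling: macroscopic elements are handed out one at a time so that an expensive micro-scale solve on one element does not stall the remaining workers. The solve_rve routine below is a hypothetical stand-in for the micro-scale finite-element problem attached to one macroscopic element.

    import multiprocessing as mp

    def solve_rve(task):
        """Placeholder for the micro-scale solve of one macroscopic element."""
        element_id, macro_strain = task
        return element_id, sum(macro_strain)          # stand-in computation

    if __name__ == "__main__":
        tasks = [(e, [0.01 * e, 0.0, 0.0]) for e in range(100)]   # one task per element
        with mp.Pool(processes=4) as pool:
            # imap_unordered gives dynamic scheduling: each worker pulls a new
            # element as soon as it finishes the previous one
            results = dict(pool.imap_unordered(solve_rve, tasks, chunksize=1))
        print(results[3])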

  14. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
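
    An illustrative sketch of the idea behind balanced regional parallelization (not Churchill's actual implementation): the genome is cut into regions of near-equal size, independent of chromosome boundaries, so that each worker receives a comparable amount of work.

    def balanced_regions(chrom_lengths, n_workers):
        """chrom_lengths: dict mapping chromosome name to length in bases.
        Returns one list of (chrom, start, end) regions per worker."""
        total = sum(chrom_lengths.values())
        target = total // n_workers + 1                # bases per worker, rounded up
        buckets = [[] for _ in range(n_workers)]
        current, used = 0, 0
        for chrom, length in chrom_lengths.items():
            start = 0
            while start < length:
                end = min(length, start + target - used)
                buckets[current].append((chrom, start, end))
                used += end - start
                start = end
                if used >= target and current < n_workers - 1:
                    current, used = current + 1, 0     # move on to the next worker
        return buckets

    regions = balanced_regions({"chr1": 248_956_422, "chr2": 242_193_529}, 8)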

  15. Research on Parallel Three Phase PWM Converters based on RTDS

    NASA Astrophysics Data System (ADS)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase the capacity of the system, but it may give rise to a zero-sequence circulating current, so control of the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study suppression of the circulating current. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed; a strategy using variable zero-vector control was then proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were carried out, and the results show that the proposed control strategy is feasible and effective.

  16. Studying the effects of genistein on gene expression of fish embryos as an alternative testing approach for endocrine disruption.

    PubMed

    Schiller, Viktoria; Wichmann, Arne; Kriehuber, Ralf; Muth-Köhne, Elke; Giesy, John P; Hecker, Markus; Fenske, Martina

    2013-01-01

    Assessment of endocrine disruption currently relies on testing strategies involving adult vertebrates. In order to minimize the use of animal tests according to the 3Rs principle of replacement, reduction and refinement, we propose a transcriptomics- and fish embryo-based approach as an alternative to identify and analyze estrogenic activity of environmental chemicals. For this purpose, the suitability of 48 h and 7 days post-fertilization zebrafish and medaka embryos to test for estrogenic disruption was evaluated. The embryos were exposed to the phytoestrogen genistein and subsequently analyzed by microarrays and quantitative real-time PCR. The functional analysis showed that the affected genes related to multiple metabolic and signaling pathways in the early fish embryo, which reflect the known components of genistein's modes of action, such as apoptosis, estrogenic response, hox gene expression, and steroid hormone synthesis. Moreover, the transcriptomic data also suggested a thyroidal mode of action and disruption of nervous system development. The parallel testing of two fish species provided complementary data on the effects of genistein at the gene expression level and facilitated the separation of common from species-dependent effects. Overall, the study demonstrated that combining fish embryo testing with transcriptomics can deliver abundant information about the mechanistic effects of endocrine disrupting chemicals, rendering this strategy a promising alternative approach to test for endocrine disruption in a whole-organism, in vitro-scale system. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop-type detection strategy, which identifies different types of loops; a critical-path-with-coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and a loop allocation strategy, which realizes optimally overlapped operations between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated code is highly parallelized to provide the maximum degree of parallelism, obtaining speedup on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and the proposed communication models is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient code than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  18. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  19. Teaching RLC Parallel Circuits in High-School Physics Class

    ERIC Educational Resources Information Center

    Simon, Alpár

    2015-01-01

    This paper will try to give an alternative treatment of the subject "parallel RLC circuits" and "resonance in parallel RLC circuits" from the Physics curricula for the XIth grade from Romanian high-schools, with an emphasis on practical type circuits and their possible applications, and intends to be an aid for both Physics…
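
    For reference (standard textbook relations added here for convenience, not taken from the cited paper), the key resonance results for the ideal parallel RLC circuit, and for the practical case of a coil with series resistance R_L placed in parallel with a capacitor C, are:

    \omega_0 = \frac{1}{\sqrt{LC}}, \qquad
    f_0 = \frac{\omega_0}{2\pi} = \frac{1}{2\pi\sqrt{LC}}, \qquad
    Q = \frac{R}{\omega_0 L} = R\sqrt{\frac{C}{L}}
    % practical tank circuit: the coil resistance R_L shifts the resonance below the ideal value
    f_r = \frac{1}{2\pi}\sqrt{\frac{1}{LC} - \frac{R_L^2}{L^2}}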

  20. Increased phospho-AKT is associated with loss of the androgen receptor during the progression of N-methyl-N-nitrosourea-induced prostate carcinogenesis in rats.

    PubMed

    Liao, Zhiming; Wang, Shihua; Boileau, Thomas W-M; Erdman, John W; Clinton, Steven K

    2005-07-01

    Characterization of molecular events during N-methyl-N-nitrosourea (MNU)-induced rat prostate carcinogenesis enhances the utility of this model for the preclinical assessment of preventive strategies. Androgen independence is typical of advanced human prostate cancer and may occur through multiple mechanisms including the loss of androgen receptor (AR) expression and the activation of alternative signaling pathways. We examined the interrelationships between AR and p-AKT expression by immunohistochemical staining during MNU-androgen-induced prostate carcinogenesis in male Wistar-Unilever rats. Histone nuclear staining and image analysis was employed to assess parallel changes in chromatin and nuclear structure. The percentage of AR positive nuclei decreased (P < 0.01) as carcinogenesis progressed: hyperplasia (92%), atypical hyperplasia (92%), well-differentiated adenocarcinoma (57%), moderately-differentiated adenocarcinoma (19%), and poorly-differentiated adenocarcinoma (10%). Conversely, p-AKT staining increased significantly during carcinogenesis. Sparse staining was observed in normal tissues (0.2% of epithelial area) and hyperplastic lesions (0.1%), while expression increased significantly (P < 0.001) in atypical hyperplasia (7.6%), well-differentiated adenocarcinoma (16.7%), moderately-differentiated adenocarcinoma (19.6%), and poorly-differentiated adenocarcinoma (17.4%). In parallel, nuclear morphometry revealed increased nuclear size, greater irregularity, and lower DNA compactness as cancers became more poorly differentiated. In the MNU model, the progressive evolution of dominant tumor cell populations showing an increase in p-AKT in parallel with a decline in AR staining suggests that activation of AKT signaling may be one of several mechanisms contributing to androgen insensitivity during prostate cancer progression. Our observations mimic findings suggested by human studies and support the relevance of the MNU model in preclinical studies of preventive strategies. (c) 2005 Wiley-Liss, Inc.

  1. Unique Study Designs in Nephrology: N-of-1 Trials and Other Designs.

    PubMed

    Samuel, Joyce P; Bell, Cynthia S

    2016-11-01

    Alternatives to the traditional parallel-group trial design may be required to answer clinical questions in special populations, rare conditions, or with limited resources. N-of-1 trials are a unique trial design which can inform personalized evidence-based decisions for the patient when data from traditional clinical trials are lacking or not generalizable. A concise overview of factorial design, cluster randomization, adaptive designs, crossover studies, and n-of-1 trials will be provided along with pertinent examples in nephrology. The indication for analysis strategies such as equivalence and noninferiority trials will be discussed, as well as analytic pitfalls. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  2. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  3. Interactions of spatial strategies producing generalization gradient and blocking: A computational approach

    PubMed Central

    Dollé, Laurent; Chavarriaga, Ricardo

    2018-01-01

    We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive mapping and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model make it possible to explain previously apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism that reconciles experimental results showing different influences of distal and proximal cues on behavior, different learning times, and different abilities of individuals to alternately perform spatial and response strategies relies on the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600

  4. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that a distributed-memory architecture is preferable to a shared-memory one for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  5. Computational efficiency of parallel combinatorial OR-tree searches

    NASA Technical Reports Server (NTRS)

    Li, Guo-Jie; Wah, Benjamin W.

    1990-01-01

    The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.

  6. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into developing procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  7. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  8. Long-time atomistic simulations with the Parallel Replica Dynamics method

    NASA Astrophysics Data System (ADS)

    Perez, Danny

    Molecular Dynamics (MD) -- the numerical integration of atomistic equations of motion -- is a workhorse of computational materials science. Indeed, MD can in principle be used to obtain any thermodynamic or kinetic quantity, without introducing any approximation or assumptions beyond the adequacy of the interaction potential. It is therefore an extremely powerful and flexible tool to study materials with atomistic spatio-temporal resolution. These enviable qualities however come at a steep computational price, hence limiting the system sizes and simulation times that can be achieved in practice. While the size limitation can be efficiently addressed with massively parallel implementations of MD based on spatial decomposition strategies, allowing for the simulation of trillions of atoms, the same approach usually cannot extend the timescales much beyond microseconds. In this article, we discuss an alternative parallel-in-time approach, the Parallel Replica Dynamics (ParRep) method, that aims at addressing the timescale limitation of MD for systems that evolve through rare state-to-state transitions. We review the formal underpinnings of the method and demonstrate that it can provide arbitrarily accurate results for any definition of the states. When an adequate definition of the states is available, ParRep can simulate trajectories with a parallel speedup approaching the number of replicas used. We demonstrate the usefulness of ParRep by presenting different examples of materials simulations where access to long timescales was essential to access the physical regime of interest and discuss practical considerations that must be addressed to carry out these simulations. Work supported by the United States Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.
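
    As a toy illustration of the time accounting that gives ParRep its speedup (an editorial sketch, not the production method), consider a rare escape event with exponential waiting-time statistics: the total time simulated by all replicas up to the first observed transition is statistically equivalent to the escape time of a single long trajectory.

    import numpy as np

    rng = np.random.default_rng(0)
    rate, n_replicas, n_trials = 1e-3, 16, 20000

    # serial MD: one long trajectory per trial
    serial_escape = rng.exponential(1.0 / rate, size=n_trials)

    # ParRep: replicas run independently; the first to escape stops the others,
    # and the accumulated simulated time is n_replicas times the time to first escape
    replica_times = rng.exponential(1.0 / rate, size=(n_trials, n_replicas))
    parrep_escape = n_replicas * replica_times.min(axis=1)

    print(serial_escape.mean(), parrep_escape.mean())   # both approach 1 / rate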

  9. Factors related to the parallel use of complementary and alternative medicine with conventional medicine among patients with chronic conditions in South Korea.

    PubMed

    Choi, Byunghee; Han, Dongwoon; Na, Seonsam; Lim, Byungmook

    2017-06-01

    This study aims to examine the characteristics and behavioral patterns of patients with chronic conditions behind their parallel use of the conventional medicine (CM) and the complementary and alternative medicine (CAM) that includes traditional Korean Medicine (KM). This cross-sectional study used the self-administered anonymous survey method to obtain the results from inpatients who were staying in three hospitals in Gyeongnam province in Korea. Of the 423 participants surveyed, 334 participants (79.0%) used some form of CAM among which KM therapies were the most common modalities. The results of a logistic regression analysis showed that the parallel use pattern was most apparent in the groups aged over 40. Patients with hypertension or joint diseases were seen to have higher propensity to show the parallel use patterns, whereas patients with diabetes were not. In addition, many sociodemographic and health-related characteristics are related to the patterns of the parallel use of CAM and CM. In the rural area of Korea, most inpatients who used CM for the management of chronic conditions used CAM in parallel. KM was the most common in CAM modalities, and the aspect of parallel use varied according to the disease conditions.

  10. A communication library for the parallelization of air quality models on structured grids

    NASA Astrophysics Data System (ADS)

    Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian

    PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.
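
    PAQMSG itself is a Fortran 90/MPI library; as an editorial illustration of the master-worker distribute/gather pattern it implements (not the library's actual API), a minimal mpi4py sketch is given below, run with e.g. "mpiexec -n 4 python sketch.py".

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # master: split the gridded field into one slab per worker
        field = np.arange(size * 8, dtype=float).reshape(size, 8)
        slabs = [field[i] for i in range(size)]
    else:
        slabs = None

    local = comm.scatter(slabs, root=0)    # distribution routine
    local = local * 2.0                    # worker: local chemistry/transport step
    gathered = comm.gather(local, root=0)  # gathering routine

    if rank == 0:
        print(np.vstack(gathered).shape)   # (size, 8): the reassembled field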

  11. A visual parallel-BCI speller based on the time-frequency coding strategy.

    PubMed

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper is to develop a visual parallel-BCI speller system based on the time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, thereby all characters had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in the offline and online spellings. A classification strategy was designed to recognize the target character through jointly using the canonical correlation analysis and stepwise linear discriminant analysis. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching the highest information transfer rate of 67.4 bit min(-1), with an average of 54.0 bit min(-1) and 43.0 bit min(-1) in the three rounds and five rounds, respectively. The results indicated that the proposed parallel-BCI could be effectively controlled by users with attention shifting fluently among the sub-spellers, and highly improved the BCI spelling performance.

  12. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD) method, have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. Meanwhile, the two algorithms, combined with standard addition procedures, have been applied for the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. Both second-order calibrations adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. It was found that both algorithms could give accurate results, with the performance of APTLD being only slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL), and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) using the PARAFAC and APTLD algorithms, respectively. The accuracy was again evaluated by t-test, and both algorithms gave accurate results.

  13. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamically adjusting local routing strategies

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-03-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing policies in selective nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.

  14. Decision making by superimposing information from parallel cognitive channels

    NASA Astrophysics Data System (ADS)

    Aityan, Sergey K.

    1993-08-01

    A theory of decision making with perception through parallel information channels is presented. Decision making is considered a parallel competitive process. Every channel can provide confirmation or rejection of a decision concept. Different channels have different impacts on specific concepts, determined by the goals and individual cognitive features. All concepts are divided into semantic clusters according to the goals and the system defaults. The clusters can be alternative or complementary. 'Winner-take-all' firing of concept nodes takes place within an alternative cluster, whereas concepts can be activated independently in a complementary cluster. A cognitive channel affects a decision concept by sending an activating or inhibitory signal. The complementary clusters serve to build up complex concepts by superimposing activation received from various channels, while decision making itself is carried out by the alternative clusters. Every active concept in an alternative cluster tends to suppress the competing concepts in the cluster by sending inhibitory signals to the other nodes of the cluster. The model accounts for a time delay in signal transmission between the nodes and explains the decrease in reaction time when information is confirmed by different channels and the increase in reaction time when conflicting information is received from the channels.

  15. Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data

    NASA Astrophysics Data System (ADS)

    Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.

    2011-09-01

    Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput, and economical alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition, without the need for periodic recalibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
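
    A hedged sketch of how such an adaptive CMA-ES loop could fit a linear drift-correction direction for e-nose features: the data, the nearest-centroid objective, and all parameter choices below are hypothetical placeholders rather than the paper's method, and the widely used cma package with its ask/tell interface is assumed to be available.

    import numpy as np
    import cma

    features = np.random.rand(200, 6)             # placeholder sensor responses
    labels = np.random.randint(0, 3, size=200)    # placeholder class labels

    def misclassification(direction):
        """Remove the candidate drift component, then score nearest-centroid errors."""
        d = np.asarray(direction)
        d = d / (np.linalg.norm(d) + 1e-12)
        corrected = features - np.outer(features @ d, d)
        centroids = np.array([corrected[labels == k].mean(axis=0) for k in range(3)])
        dists = ((corrected[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        return float(np.mean(np.argmin(dists, axis=1) != labels))

    es = cma.CMAEvolutionStrategy(np.zeros(6), 0.3, {"maxiter": 50})
    while not es.stop():
        candidates = es.ask()                     # sample new drift directions
        es.tell(candidates, [misclassification(c) for c in candidates])
    best_direction = es.result.xbest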

  16. STRATEGIES AND TECHNOLOGY FOR MANAGING HIGH-CARBON ASH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Hurt; Eric Suuberg; John Veranth

    2004-02-13

    The overall objective of the present project was to identify and assess strategies and solutions for the management of industry problems related to carbon in ash. Specific issues addressed included: (1) the effect of parent fuel selection on ash properties and adsorptivity, including a first ever examination of the air entrainment behavior of ashes from alternative (non-coal) fuels; (2) the effect of various low-NOx firing modes on ash properties and adsorptivity based on pilot-plant studies; and (3) the kinetics and mechanism of ash ozonation. This laboratory data has provided scientific and engineering support and underpinning for parallel process development activities. The development work on the ash ozonation process has now transitioned into a scale-up and commercialization project involving a multi-industry team and scheduled to begin in 2004. This report describes and documents the laboratory and pilot-scale work in the above three areas done at Brown University and the University of Utah during this three-year project.

  17. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    NASA Astrophysics Data System (ADS)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double-precision alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software to a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern, MPI-OpenMP-CUDA, that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap computation with communication using the advanced features of CUDA and MPI programming. We obtain a speedup of 6.0 for the ADI solver on one Tesla M2050 GPU compared with two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platforms.

  18. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

    Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  19. Architectural study of the design and operation of advanced force feedback manual controllers

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Kim, Whee-Kuk

    1990-01-01

    A teleoperator system consists of a manual controller, control hardware/software, and a remote manipulator. It is employed in hazardous, unstructured, and/or remote environments. In teleoperation, the man-in-the-loop is the central concept that brings human intelligence to the teleoperator system. When teleoperation involves contact with an uncertain environment, providing the feeling of telepresence to the human operator is one of the desired characteristics of the teleoperator system. Unfortunately, most available manual controllers in bilateral or force-reflecting teleoperator systems can be characterized by their bulky size, high cost, lack of smoothness and transparency, and elementary architectures. To investigate other alternatives, a force-reflecting, 3 degree-of-freedom (dof) spherical manual controller is designed, analyzed, and implemented as a test-bed demonstration in this research effort. To achieve an improved level of design that meets criteria such as compactness, portability, and a somewhat enhanced force-reflecting capability, the demonstration manual controller employs high gear-ratio reducers. To reduce the effects of inertia and friction on the system, various force control strategies are applied and their performance investigated. The spherical manual controller uses a parallel geometry to minimize inertial and gravitational effects on its primary task of transparent information transfer. As an alternative to the spherical 3-dof manual controller, a new conceptual (or parallel) spherical 3-dof module is introduced with a full kinematic analysis. Also, the resulting kinematic properties are compared to those of other typical spherical 3-dof systems. The conceptual design of a parallel 6-dof manual controller and its kinematic analysis is presented. This 6-dof manual controller is similar to the Stewart Platform, with the actuators located on the base to minimize dynamic effects. Finally, a combination of the new 3-dof and 6-dof concepts is presented as a feasible test bed for enhanced performance in a 9-dof system.

  20. 76 FR 20750 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-13

    ... on the Exchange's Internet Web site at http://www.directedge.com . \\3\\ A Member is any registered... strategy to the ROUD/ROUE routing strategies is Parallel D or Parallel 2D with the DRT (Dark routing... one method. The Commission will post all comments on the Commission's Internet Web site ( http://www...

  1. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper is to develop a visual parallel-BCI speller system based on the time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, thereby all characters had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in the offline and online spellings. A classification strategy was designed to recognize the target character through jointly using the canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching the highest information transfer rate of 67.4 bit min-1, with an average of 54.0 bit min-1 and 43.0 bit min-1 in the three rounds and five rounds, respectively. Significance. The results indicated that the proposed parallel-BCI could be effectively controlled by users with attention shifting fluently among the sub-spellers, and highly improved the BCI spelling performance.

  2. Antimicrobial Resistance: Its Surveillance, Impact, and Alternative Management Strategies in Dairy Animals

    PubMed Central

    Sharma, Chetan; Rokana, Namita; Chandra, Mudit; Singh, Brij Pal; Gulhane, Rohini Devidas; Gill, Jatinder Paul Singh; Ray, Pallab; Puniya, Anil Kumar; Panwar, Harsh

    2018-01-01

    Antimicrobial resistance (AMR), one of the most common priority areas identified by both national and international agencies, is mushrooming as a silent pandemic. The advancement of public health care through the introduction of antibiotics against infectious agents is now being threatened by the global development of multidrug-resistant strains. These strains are a product of both continuous evolution and unchecked antimicrobial usage (AMU). Though antibiotic application in livestock has contributed greatly to health and productivity, it has also played a significant role in the evolution of resistant strains. Although significant emphasis has been given to AMR in humans, trends in animals, on the other hand, have received much less attention. Dairy farming involves surplus use of antibiotics as prophylactic and growth-promoting agents. This non-therapeutic application of antibiotics, their dosage, and their withdrawal period need to be re-evaluated and rationally defined. A dairy animal also poses a serious risk of transmitting resistant strains to humans and the environment. Outlining the scope of the problem is necessary for formulating and monitoring an active response to AMR. Effective and well-coordinated surveillance programs at the multidisciplinary level can contribute to a better understanding and minimization of the emergence of resistance. In addition, a renewed emphasis on investment in research to find alternative, safe, cost-effective, and innovative strategies is required, in parallel with the discovery of new antibiotics. Nevertheless, numerous direct and indirect novel approaches based on host–microbial interactions and the molecular mechanisms of pathogens are also being developed and corroborated by researchers to combat the threat of resistance. This review makes a concerted effort to bring together the current outline of AMU and AMR in dairy animals; ongoing global surveillance and monitoring programs; their impact at the animal-human interface; and strategies for combating resistance, with an extensive overview of possible alternatives to current antibiotics that could be implemented in the livestock sector. PMID:29359135

  3. Antimicrobial Resistance: Its Surveillance, Impact, and Alternative Management Strategies in Dairy Animals.

    PubMed

    Sharma, Chetan; Rokana, Namita; Chandra, Mudit; Singh, Brij Pal; Gulhane, Rohini Devidas; Gill, Jatinder Paul Singh; Ray, Pallab; Puniya, Anil Kumar; Panwar, Harsh

    2017-01-01

    Antimicrobial resistance (AMR), one of the most common priority areas identified by both national and international agencies, is mushrooming as a silent pandemic. The advancement of public health care through the introduction of antibiotics against infectious agents is now being threatened by the global development of multidrug-resistant strains. These strains are a product of both continuous evolution and unchecked antimicrobial usage (AMU). Though antibiotic application in livestock has contributed greatly to health and productivity, it has also played a significant role in the evolution of resistant strains. Although significant emphasis has been given to AMR in humans, trends in animals, on the other hand, have received much less attention. Dairy farming involves surplus use of antibiotics as prophylactic and growth-promoting agents. This non-therapeutic application of antibiotics, their dosage, and their withdrawal period need to be re-evaluated and rationally defined. A dairy animal also poses a serious risk of transmitting resistant strains to humans and the environment. Outlining the scope of the problem is necessary for formulating and monitoring an active response to AMR. Effective and well-coordinated surveillance programs at the multidisciplinary level can contribute to a better understanding and minimization of the emergence of resistance. In addition, a renewed emphasis on investment in research to find alternative, safe, cost-effective, and innovative strategies is required, in parallel with the discovery of new antibiotics. Nevertheless, numerous direct and indirect novel approaches based on host-microbial interactions and the molecular mechanisms of pathogens are also being developed and corroborated by researchers to combat the threat of resistance. This review makes a concerted effort to bring together the current outline of AMU and AMR in dairy animals; ongoing global surveillance and monitoring programs; their impact at the animal-human interface; and strategies for combating resistance, with an extensive overview of possible alternatives to current antibiotics that could be implemented in the livestock sector.

  4. Steer-PROP: a GRASE-PROPELLER sequence with interecho steering gradient pulses.

    PubMed

    Srinivasan, Girish; Rangwala, Novena; Zhou, Xiaohong Joe

    2018-05-01

    This study demonstrates a novel PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) pulse sequence, termed Steer-PROP, based on gradient and spin echo (GRASE), to reduce the imaging times and address phase errors inherent to GRASE. The study also illustrates the feasibility of using Steer-PROP as an alternative to single-shot echo planar imaging (SS-EPI) to produce distortion-free diffusion images in all imaging planes. Steer-PROP uses a series of blip gradient pulses to produce N (N = 3-5) adjacent k-space blades in each repetition time, where N is the number of gradient echoes in a GRASE sequence. This sampling strategy enables a phase correction algorithm to systematically address the GRASE phase errors as well as the motion-induced phase inconsistency. Steer-PROP was evaluated on phantoms and healthy human subjects at both 1.5T and 3.0T for T2- and diffusion-weighted imaging. Steer-PROP produced similar image quality as conventional PROPELLER based on fast spin echo (FSE), while taking only a fraction (e.g., 1/3) of the scan time. The robustness against motion in Steer-PROP was comparable to that of FSE-based PROPELLER. Using Steer-PROP, high quality and distortion-free diffusion images were obtained from human subjects in all imaging planes, demonstrating a considerable advantage over SS-EPI. The proposed Steer-PROP sequence can substantially reduce the scan times compared with FSE-based PROPELLER while achieving adequate image quality. The novel k-space sampling strategy in Steer-PROP not only enables an integrated phase correction method that addresses various sources of phase errors, but also minimizes the echo spacing compared with alternative sampling strategies. Steer-PROP can also be a viable alternative to SS-EPI to decrease image distortion in all imaging planes. Magn Reson Med 79:2533-2541, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performances of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA, and carrier sense multiple access (CSMA)) are compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to the practical exploitation of highly parallel optical interconnects in advanced computer designs.

  6. Diverse strategies of O2 usage for preventing photo-oxidative damage under CO2 limitation during algal photosynthesis.

    PubMed

    Shimakawa, Ginga; Matsuda, Yusuke; Nakajima, Kensuke; Tamoi, Masahiro; Shigeoka, Shigeru; Miyake, Chikahiro

    2017-01-20

    Photosynthesis produces chemical energy from photon energy in the photosynthetic electron transport and assimilates CO2 using the chemical energy. Thus, CO2 limitation causes an accumulation of excess energy, resulting in reactive oxygen species (ROS) which can cause oxidative damage to cells. O2 can be used as an alternative energy sink when oxygenic phototrophs are exposed to high light. Here, we examined the responses to CO2 limitation and O2 dependency of two secondary algae, Euglena gracilis and Phaeodactylum tricornutum. In E. gracilis, approximately half of the relative electron transport rate (ETR) of CO2-saturated photosynthesis was maintained and was uncoupled from photosynthesis under CO2 limitation. The ETR showed biphasic dependencies on O2 at high and low O2 concentrations. Conversely, in P. tricornutum, most relative ETR decreased in parallel with the photosynthetic O2 evolution rate in response to CO2 limitation. Instead, non-photochemical quenching was strongly activated under CO2 limitation in P. tricornutum. The results indicate that these secondary algae adopt different strategies to acclimatize to CO2 limitation, and that both strategies differ from those utilized by cyanobacteria and green algae. We summarize the diversity of strategies for prevention of photo-oxidative damage under CO2 limitation in cyanobacterial and algal photosynthesis.

  7. Sidewall containment of liquid metal with horizontal alternating magnetic fields

    DOEpatents

    Praeg, Walter F.

    1990-01-01

    An apparatus for confining molten metal with a horizontal alternating magnetic field. In particular, this invention employs a magnet that can produce a horizontal alternating magnetic field to confine a molten metal at the edges of parallel horizontal rollers as a solid metal sheet is cast by counter-rotation of the rollers.

  8. Sidewall containment of liquid metal with horizontal alternating magnetic fields

    DOEpatents

    Praeg, Walter F.

    1995-01-01

    An apparatus for confining molten metal with a horizontal alternating magnetic field. In particular, this invention employs a magnet that can produce a horizontal alternating magnetic field to confine a molten metal at the edges of parallel horizontal rollers as a solid metal sheet is cast by counter-rotation of the rollers.

  9. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in the GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, even though all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
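
    The outer-loop strategy described in this abstract can be illustrated with a small, hedged analogue: the paper uses OpenMP on a shared-memory system, whereas the sketch below uses a Python process pool to parallelize the outer loop of a generic nested computation. The function names and the toy inner computation are assumptions, not taken from the paper.

        # Hedged sketch: outer-loop parallelization of a generic double loop.
        # The real C-GMM objective and the OpenMP implementation are not shown.
        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def inner_work(i, n_inner):
            # The inner loop stays sequential inside each worker (outer-loop strategy).
            return float(np.sum(np.cos(i * np.arange(n_inner) * 1e-3)))

        def outer_loop_parallel(n_outer, n_inner, workers=4):
            with ProcessPoolExecutor(max_workers=workers) as pool:
                results = pool.map(inner_work, range(n_outer), [n_inner] * n_outer)
            return sum(results)

        if __name__ == "__main__":
            print(outer_loop_parallel(n_outer=100, n_inner=10_000))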

  10. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

  11. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
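
    The two move types described above (cell exchange and cell displacement) can be made concrete with a small sketch. The following serial simulated-annealing loop is only an illustration under assumed data structures, with a toy one-dimensional slot layout and a bounding-box style cost; the hypercube mapping, distributed cost evaluation, and tree broadcasting of the paper are not reproduced.

        # Hedged sketch of an annealing placement loop with the two move types.
        import math
        import random

        def cost(placement, nets):
            # Sum of slot-span of each net over a 1D row of slots (toy wirelength).
            return sum(max(placement[c] for c in net) - min(placement[c] for c in net)
                       for net in nets)

        def anneal(num_cells, num_slots, nets, t0=10.0, alpha=0.95, steps=2000):
            placement = {c: random.randrange(num_slots) for c in range(num_cells)}
            t = t0
            for _ in range(steps):
                trial = dict(placement)
                if random.random() < 0.5:                      # cell exchange
                    a, b = random.sample(range(num_cells), 2)
                    trial[a], trial[b] = trial[b], trial[a]
                else:                                          # cell displacement
                    trial[random.randrange(num_cells)] = random.randrange(num_slots)
                delta = cost(trial, nets) - cost(placement, nets)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    placement = trial
                t *= alpha                                     # geometric cooling
            return placement

        nets = [(0, 1, 2), (2, 3), (1, 4)]
        print(cost(anneal(5, 20, nets), nets))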

  12. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important capability that SPEN still needs in order to provide a competitive acquisition alternative is the exploitation of parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view, together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case in single-scan imaging near tissue/air interfaces. PMID:24120293

  13. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique of the hybrid electric bus (HEB) for advancing fuel economy optimization, and it is unique to the corresponding configuration. Existing control strategy algorithms seldom take battery power management into account together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to constitute the objective function, which minimizes the fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and the battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop testing integrated with a bench test. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research ensures that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and is well suited to complicated configurations.

  14. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  15. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  16. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
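
    As an aside, one common low-cost load-balancing heuristic for distributing documents of uneven length across workers is a greedy largest-first assignment. The sketch below illustrates that generic idea only; it is not claimed to be the strategy implemented in paraBTM, and the data are made up.

        # Generic greedy load balancing: assign the largest remaining document
        # to the currently least-loaded worker (min-heap keyed on load).
        import heapq

        def balance(doc_sizes, n_workers):
            heap = [(0, w, []) for w in range(n_workers)]
            heapq.heapify(heap)
            for doc, size in sorted(enumerate(doc_sizes), key=lambda x: -x[1]):
                load, w, docs = heapq.heappop(heap)
                docs.append(doc)
                heapq.heappush(heap, (load + size, w, docs))
            return sorted(heap, key=lambda x: x[1])

        for load, worker, docs in balance([40, 10, 25, 25, 5, 60, 15], 3):
            print(f"worker {worker}: load={load}, docs={docs}")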

  17. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from a unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  18. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs with effects that develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also affects benefit-risk assessment.

  19. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from those of conventional parallel-design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in study design among the 3 RCTs and presented 3 alternative methods for converting outcome data derived from one form of adaptive design to the format of more conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert outcome data from one form of adaptive design to a format more consistent with conventional parallel RCTs facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA that includes treatments evaluated with an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to the format of more conventional parallel RCTs to facilitate transparent and less-biased comparisons.

  20. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or

  1. Sidewall containment of liquid metal with horizontal alternating magnetic fields

    DOEpatents

    Praeg, W.F.

    1995-01-31

    An apparatus is disclosed for confining molten metal with a horizontal alternating magnetic field. In particular, this invention employs a magnet that can produce a horizontal alternating magnetic field to confine a molten metal at the edges of parallel horizontal rollers as a solid metal sheet is cast by counter-rotation of the rollers. 19 figs.

  2. Two Oral Midazolam Preparations in Pediatric Dental Patients: A Prospective Randomised Clinical Trial

    PubMed Central

    Kamranzadeh, Shaqayegh; Kousha, Maryam; Shaeghi, Shahnaz; AbdollahGorgi, Fatemeh

    2015-01-01

    Pharmacological sedation is an alternative behavior management strategy in pediatric dentistry. The aim of this study was to compare the behavioral and physiologic effects of “commercially available midazolam syrup” versus an “orally administered IV midazolam dosage form (extemporaneous midazolam (EF))” in uncooperative pediatric dental patients. Eighty-eight children between 4 and 7 years of age received 0.2–0.5 mg/kg midazolam in this parallel trial. Physiologic parameters were recorded at baseline and every 15 minutes. Behavior was assessed objectively with the Houpt scale throughout sedation and with the North Carolina scale at baseline, during injection, and during cavity preparation. No significant difference in behavior was noted on either the Houpt or the North Carolina scale. Acceptable behavior (excellent, very good, and good) was observed in 90.9% of syrup subjects and 79.5% of EF subjects. Physiological parameters remained in the normal range without significant differences between groups, and no adverse effects were observed. It is concluded that the EF midazolam preparation can be used as an acceptable alternative to midazolam syrup. PMID:26120325

  3. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield a close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
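
    A hedged sketch of the two ingredients that parallel tempering combines for a problem like this: each replica runs Metropolis moves at its own temperature (here a simple 2-opt segment reversal on a random tour), and neighbouring replicas periodically attempt a temperature swap with the standard exchange criterion. The instance size, temperature ladder, and move set below are illustrative assumptions, not the paper's implementation.

        import math
        import random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def two_opt_move(tour):
            # Reverse a random segment of the tour.
            i, j = sorted(random.sample(range(len(tour)), 2))
            return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

        def parallel_tempering(dist, temps, sweeps=500):
            n = len(dist)
            replicas = [random.sample(range(n), n) for _ in temps]
            for _ in range(sweeps):
                for k, t in enumerate(temps):                  # Metropolis moves per replica
                    trial = two_opt_move(replicas[k])
                    delta = tour_length(trial, dist) - tour_length(replicas[k], dist)
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        replicas[k] = trial
                for k in range(len(temps) - 1):                # exchange between neighbours
                    e1 = tour_length(replicas[k], dist)
                    e2 = tour_length(replicas[k + 1], dist)
                    arg = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (e1 - e2)
                    if arg >= 0 or random.random() < math.exp(arg):
                        replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
            return min(replicas, key=lambda r: tour_length(r, dist))

        pts = [(random.random(), random.random()) for _ in range(30)]
        dist = [[math.dist(a, b) for b in pts] for a in pts]
        best = parallel_tempering(dist, temps=[0.05, 0.1, 0.2, 0.4])
        print(round(tour_length(best, dist), 3))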

  4. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.

  5. Interplay between pro-inflammatory cytokines and growth factors in depressive illnesses

    PubMed Central

    Audet, Marie-Claude; Anisman, Hymie

    2013-01-01

    The development of depressive disorders had long been attributed to monoamine variations, and pharmacological treatment strategies likewise focused on methods of altering monoamine availability. However, the limited success achieved by treatments that altered these processes spurred the search for alternative mechanisms and treatments. Here we provide a brief overview concerning a possible role for pro-inflammatory cytokines and growth factors in major depression, as well as the possibility of targeting these factors in treating this disorder. The data suggest that focusing on one or another cytokine or growth factor might be counterproductive, especially as these factors may act sequentially or in parallel in affecting depressive disorders. It is also suggested that cytokines and growth factors might be useful biomarkers for individualized treatments of depressive illnesses. PMID:23675319

  6. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  7. The R package "sperrorest": Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    NASA Astrophysics Data System (ADS)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the possibility to perform cross-validation at the level of some grouping structure. As an example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and will thus be jointly placed in either the test set or the training set. Other spatial sampling and resampling strategies are already available and can be extended by the user.
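
    The grouped-partitioning idea behind partition.factor.cv() can be illustrated outside R as well. The sketch below is a Python analogue, not the sperrorest implementation, using scikit-learn's GroupKFold so that samples sharing a group label (such as pixels from the same field) always end up together in either the training or the test fold; the data and model are synthetic assumptions.

        import numpy as np
        from sklearn.model_selection import GroupKFold
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))             # toy features
        groups = rng.integers(0, 20, size=200)    # e.g. field identifiers
        y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        scores = []
        for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
            # All samples of a given group land entirely in train or in test.
            model = RandomForestClassifier(n_estimators=50, random_state=0)
            model.fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
        print("grouped CV accuracy:", np.mean(scores))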

  8. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
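
    For reference, a minimal sketch of the classical Crank-Nicolson baseline mentioned above, applied to the 1D heat equation with fixed boundary values; the paper's high-order collocation scheme and its space-time parallelization are not reproduced, and a dense solve is used for brevity where a tridiagonal solver would normally be preferred.

        import numpy as np

        def crank_nicolson_step(u, a, dx, dt):
            # One implicit step of u_t = a u_xx: (I - rL) u_new = (I + rL) u_old.
            n = len(u)
            r = a * dt / (2 * dx * dx)
            A = np.eye(n)
            B = np.eye(n)
            for i in range(1, n - 1):
                A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
                B[i, i - 1], B[i, i], B[i, i + 1] = r, 1 - 2 * r, r
            return np.linalg.solve(A, B @ u)   # boundary rows stay fixed (Dirichlet)

        x = np.linspace(0.0, 1.0, 51)
        u = np.sin(np.pi * x)                  # initial condition, u = 0 at both ends
        for _ in range(100):
            u = crank_nicolson_step(u, a=1.0, dx=x[1] - x[0], dt=1e-3)
        print(u.max())                         # decays roughly like exp(-pi**2 * t)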

  9. An Alternative Methodology for Creating Parallel Test Forms Using the IRT Information Function.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…

  10. Mountain Plains Learning Experience Guide: Radio and T.V. Repair. Course: A.C. Circuits.

    ERIC Educational Resources Information Center

    Hoggatt, P.; And Others

    One of four individualized courses included in a radio and television repair curriculum, this course focuses on alternating current relationships and computations, transformers, power supplies, series and parallel resistive-reactive circuits, and series and parallel resonance. The course is comprised of eight units: (1) Introduction to Alternating…

  11. Targeted parallel sequencing of the Musa species: searching for an alternative model system for polyploidy studies

    USDA-ARS?s Scientific Manuscript database

    Modern day genomics holds the promise of solving the complexities of basic plant sciences, and of catalyzing practical advances in plant breeding. While contiguous, "base perfect" deep sequencing is a key module of any genome project, recent advances in parallel next generation sequencing technologi...

  12. The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2011-01-01

    In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…

  13. Sensitivity and Specificity of Human Immunodeficiency Virus Rapid Serologic Assays and Testing Algorithms in an Antenatal Clinic in Abidjan, Ivory Coast

    PubMed Central

    Koblavi-Dème, Stéphania; Maurice, Chantal; Yavo, Daniel; Sibailly, Toussaint S.; N′guessan, Kabran; Kamelan-Tano, Yvonne; Wiktor, Stefan Z.; Roels, Thierry H.; Chorba, Terence; Nkengasong, John N.

    2001-01-01

    To evaluate serologic testing algorithms for human immunodeficiency virus (HIV) based on a combination of rapid assays among persons with HIV-1 (non-B subtypes) infection, HIV-2 infection, and HIV-1–HIV-2 dual infections in Abidjan, Ivory Coast, a total of 1,216 sera with known HIV serologic status were used to evaluate the sensitivity and specificity of four rapid assays: Determine HIV-1/2, Capillus HIV-1/HIV-2, HIV-SPOT, and Genie II HIV-1/HIV-2. Two serum panels obtained from patients recently infected with HIV-1 subtypes B and non-B were also included. Based on sensitivity and specificity, three of the four rapid assays were evaluated prospectively in parallel (serum samples tested by two simultaneous rapid assays) and serial (serum samples tested by two consecutive rapid assays) testing algorithms. All assays were 100% sensitive, and specificities ranged from 99.4 to 100%. In the prospective evaluation, both the parallel and serial algorithms were 100% sensitive and specific. Our results suggest that rapid assays have high sensitivity and specificity and, when used in parallel or serial testing algorithms, yield results similar to those of enzyme-linked immunosorbent assay-based testing strategies. HIV serodiagnosis based on rapid assays may be a valuable alternative in implementing HIV prevention and surveillance programs in areas where sophisticated laboratories are difficult to establish. PMID:11325995
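
    The shape of the two testing algorithms compared above can be sketched in a few lines: a "parallel" algorithm runs two rapid assays at once, while a "serial" algorithm runs a second assay only when the first is reactive. The decision rules for discordant results are simplified here and do not reproduce the study's exact algorithm; the assay functions are stand-ins.

        def parallel_algorithm(assay1, assay2, sample):
            r1, r2 = assay1(sample), assay2(sample)   # both assays run on every sample
            if r1 and r2:
                return "positive"
            if not r1 and not r2:
                return "negative"
            return "indeterminate (resolve with additional testing)"

        def serial_algorithm(assay1, assay2, sample):
            if not assay1(sample):
                return "negative"                     # non-reactive screen, stop here
            return "positive" if assay2(sample) else "indeterminate (resolve further)"

        # Example with stand-in assay functions (always/never reactive).
        print(parallel_algorithm(lambda s: True, lambda s: True, sample="serum-001"))
        print(serial_algorithm(lambda s: False, lambda s: True, sample="serum-002"))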

  14. Parallels between Learning Disabilities and Fetal Alcohol Syndrome/Effect: No Need To Reinvent the Wheel.

    ERIC Educational Resources Information Center

    Johnson, Carol L.; Lapadat, Judith C.

    2000-01-01

    A survey of the research and practice literatures on learning disabilities and on Fetal Alcohol Syndrome/Effect revealed parallels in learning characteristics, as well as in the recommended interventions. Based on these parallels, an adolescent with Fetal Alcohol received intervention. Teaching strategies for students with learning disabilities…

  15. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not equal to a multiple of the number of cores per cluster node, there are allocation strategies that provide more efficient calculations.

  16. The Vanguard Faculty program: research training for complementary and alternative medicine faculty.

    PubMed

    Connelly, Erin N; Elmer, Patricia J; Morris, Cynthia D; Zwickey, Heather

    2010-10-01

    The increasing use of complementary and alternative medicine (CAM) treatment is paralleled by a growing demand for an evidence-based approach to CAM practice. In 2007, the Helfgott Research Institute at the National College of Natural Medicine (NCNM), in partnership with Oregon Health & Science University (OHSU), both in Portland, OR, began a National Institutes of Health-funded initiative to increase the quality and quantity of evidence-based medicine (EBM) content in the curricula at NCNM. One key strategy of the Research in Complementary and Alternative Medicine Program (R-CAMP) initiative was to create a faculty development program that included four components: intensive training in EBM; professional skills enhancement; peer and mentored support; and, ultimately, utilization of these skills to incorporate EBM into the curricula. This initiative is centered on a core group of faculty at NCNM, called the Vanguard Faculty, who receives early, intensive training in EBM and works to incorporate this training into classes. Training consists of an intensive, week-long course, monthly group meetings, and periodic individualized meetings. Vanguard Faculty members also receive mentorship and access to resources to pursue individualized faculty development, research or scholarly activities. Early evaluations indicate that this effort has been successful in increasing EBM content in the curricula at NCNM. This article describes the Vanguard Faculty program in an effort to share the successes and challenges of implementing a wide-ranging faculty development and curricular initiative at a complementary and alternative medicine institution.

  17. A Parallel Trade Study Architecture for Design Optimization of Complex Systems

    NASA Technical Reports Server (NTRS)

    Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.

  18. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    PubMed

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple-copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ions to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
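
    A hedged sketch of the exchange step that such umbrella-sampling replica-exchange schemes rely on: two neighbouring windows with harmonic biases attempt to swap configurations using a Metropolis criterion on the bias energies. The 2D window grid, the MD engine, and the MPI layer of the actual implementation are omitted, and all numbers below are illustrative.

        import math
        import random

        def umbrella(center, k, x):
            # Harmonic biasing potential on an order parameter x.
            return 0.5 * k * (x - center) ** 2

        def attempt_swap(x_i, x_j, center_i, center_j, k, beta):
            # Metropolis acceptance for exchanging configurations between windows i and j.
            delta = (umbrella(center_i, k, x_j) + umbrella(center_j, k, x_i)
                     - umbrella(center_i, k, x_i) - umbrella(center_j, k, x_j))
            return delta <= 0 or random.random() < math.exp(-beta * delta)

        # Configurations sitting near their own window centers rarely swap.
        print(attempt_swap(x_i=2.0, x_j=3.1, center_i=2.0, center_j=3.0, k=10.0, beta=1.0))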

  19. Why do some women prefer submissive men? Hierarchically disparate couples reach higher reproductive success in European urban humans.

    PubMed

    Jozifkova, Eva; Konvicka, Martin; Flegr, Jaroslav

    2014-01-01

    Equality between partners is considered a feature of functional partnerships in westernized societies. However, the evolutionary consequences of how in-pair hierarchy influences reproduction are less well known. The attraction of some high-ranking women towards low-ranking men represents a puzzle. Young urban adults (120 men, 171 women) filled out a questionnaire focused on their sexual preference for higher- or lower-ranking partners, their future in-pair hierarchy, and the hierarchy between their parents. Human pairs with a hierarchic disparity between partners conceive more offspring than pairs of equally ranking individuals, who, in turn, conceive more offspring than pairs of two dominating partners. Importantly, the higher reproductive success of hierarchically disparate pairs holds regardless of which sex, male or female, is the dominant one. In addition, the subjects preferring hierarchical disparity in partnerships were more likely to be sexually aroused by such disparity, suggesting that both the partnership preference and the triggers of sexual arousal may reflect a mating strategy. These results challenge the frequently held belief in within-pair equality as a trademark of functional partnerships. It rather appears that the existence of some disparity improves within-pair cohesion, facilitating cooperation between partners and improving the pair's ability to face societal challenges. The parallel existence of submissivity-dominance hierarchies within both human sexes allows for the parallel existence of alternative reproductive strategies, and may form a background for the diversity of mating systems observed in human societies. Arousal by overemphasized dominance/submissiveness may explain sadomasochistic sex, still little understood from the evolutionary psychology point of view.

  20. The management of new primary care organizations: an international perspective.

    PubMed

    Meads, Geoffrey; Wild, Andrea; Griffiths, Frances; Iwami, Michiyo; Moore, Phillipa

    2006-08-01

    Management practice arising from parallel policies for modernizing health systems is examined across a purposive sample of 16 countries. In each, novel organizational developments in primary care are a defining feature of the proposed future direction. Semistructured interviews with national leaders in primary care policy development and local service implementation indicate that management strategies, which effectively address the organized resistance of medical professions to modernizing policies, have these four consistent characteristics: extended community and patient participation models; national frameworks for interprofessional education and representation; mechanisms for multiple funding and accountabilities; and the diversification of non-governmental organizations and their roles. The research, based on a two-year fieldwork programme, indicates that at the meso-level of management planning and practice, there is a considerable potential for exchange and transferable learning between previously unconnected countries. The effectiveness of management strategies abroad, for example, in contexts where for the first time alternative but comparable new primary care organizations are exercising responsibilities for local resource utilization, may be understood through the application of stakeholder analyses, such as those employed to promote parity of relationships in NHS primary care trusts.

  1. Electric-Field-Directed Parallel Alignment Architecting 3D Lithium-Ion Pathways within Solid Composite Electrolyte.

    PubMed

    Liu, Xueqing; Peng, Sha; Gao, Shuyu; Cao, Yuancheng; You, Qingliang; Zhou, Liyong; Jin, Yongcheng; Liu, Zhihong; Liu, Jiyan

    2018-05-09

    It is of great significance to seek high-performance solid electrolytes via facile chemistry and a simple process to meet the requirements of solid batteries. Previous reports revealed that ion-conducting pathways within ceramic-polymer composite electrolytes mainly occur at ceramic particles and the ceramic-polymer interface. Herein, a facile strategy for ceramic particle alignment and assembly induced by an external alternating-current (AC) electric field is presented. In situ optical microscopy showed that Li1.3Al0.3Ti1.7(PO4)3 particles and poly(ethylene glycol) diacrylate in poly(dimethylsiloxane) (LATP@PEGDA@PDMS) assembled into three-dimensionally connected networks on applying an external AC electric field. Scanning electron microscopy revealed that the ceramic LATP particles aligned into a necklacelike assembly. Electrochemical impedance spectroscopy confirmed that the ionic conductivity of this necklacelike alignment was significantly enhanced compared to that of a random one. It was demonstrated that this facile strategy of applying an AC electric field can be a very effective approach for architecting three-dimensional lithium-ion conductive networks within a solid composite electrolyte.

  2. Balancing Conflicting Requirements for Grid and Particle Decomposition in Continuum-Lagrangian Solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray

    2015-10-30

    Load balancing strategies for hybrid solvers that couple a grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) based parallelization of the grid-based solves is done using a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. One technique is to assign particles to the MPI ranks whose grid they belong to, while the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot assure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study on more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies is studied to reduce data communication overheads.
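
    The two particle-distribution strategies being compared can be sketched as follows: (a) assign each particle to the rank that owns its grid cell, which preserves spatial locality but may give unbalanced particle counts, and (b) deal particles out evenly regardless of position, which balances counts but requires remote field interpolation. The 1D slab decomposition below is a toy assumption; the paper's solver and MPI details are not shown.

        def owner_rank(x, domain_length, n_ranks):
            # Rank owning the slab of the 1D domain that contains position x.
            return min(int(x / domain_length * n_ranks), n_ranks - 1)

        def distribute_by_owner(positions, domain_length, n_ranks):
            buckets = [[] for _ in range(n_ranks)]
            for p, x in enumerate(positions):
                buckets[owner_rank(x, domain_length, n_ranks)].append(p)
            return buckets

        def distribute_evenly(positions, n_ranks):
            buckets = [[] for _ in range(n_ranks)]
            for p, _ in enumerate(positions):
                buckets[p % n_ranks].append(p)   # round-robin, ignores location
            return buckets

        positions = [0.1, 0.2, 0.25, 0.3, 0.9]   # spatially clustered particles
        print([len(b) for b in distribute_by_owner(positions, 1.0, 4)])  # unbalanced counts
        print([len(b) for b in distribute_evenly(positions, 4)])         # balanced counts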

  3. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of the continuous-time parameters of ankle dynamics. Because of this, we conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

  4. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of the USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, the drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow for reducing the time taken to export unnecessary intermediate data, and a buffer-communication-computation strategy for improving communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
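
    A toy sketch of the decomposition idea for a local terrain operation: the DEM is split into row blocks (one per "rank"), each block receives one halo row from its neighbours so that a local finite-difference slope can be computed independently, and the pieces are stitched back together. MPI communication is only simulated here, and the paper's flow-accumulation and LS-factor algorithms are not reproduced.

        import numpy as np

        def slope_magnitude(block, cellsize):
            gy, gx = np.gradient(block, cellsize)
            return np.sqrt(gx ** 2 + gy ** 2)

        def parallel_slope(dem, cellsize, n_ranks):
            rows = np.array_split(np.arange(dem.shape[0]), n_ranks)
            pieces = []
            for r in rows:
                lo = max(r[0] - 1, 0)                 # halo row from the previous block
                hi = min(r[-1] + 2, dem.shape[0])     # halo row from the next block
                local = slope_magnitude(dem[lo:hi], cellsize)
                pieces.append(local[r[0] - lo: r[0] - lo + len(r)])  # strip the halos
            return np.vstack(pieces)

        dem = np.add.outer(np.linspace(0, 50, 120), np.linspace(0, 30, 100))
        print(np.allclose(parallel_slope(dem, 10.0, 4), slope_magnitude(dem, 10.0)))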

  5. Efficient Implementation of Multigrid Solvers on Message-Passing Parallel Systems

    NASA Technical Reports Server (NTRS)

    Lou, John

    1994-01-01

    We discuss our implementation strategies for finite difference multigrid partial differential equation (PDE) solvers on message-passing systems. Our target parallel architectures are Intel parallel computers: the Delta and Paragon systems.

  6. Extensive training and hippocampus or striatum lesions: effect on place and response strategies.

    PubMed

    Jacobson, Tara K; Gruenbaum, Benjamin F; Markus, Etan J

    2012-02-01

    The hippocampus has been linked to spatial navigation and the striatum to response learning. The current study focuses on how these brain regions continue to interact when an animal is very familiar with the task and the environment and must continuously switch between navigation strategies. Rats were trained to solve a plus maze using a place or a response strategy on different trials within a testing session. A room cue (illumination) was used to indicate which strategy should be used on a given trial. After extensive training, animals underwent dorsal hippocampus, dorsal lateral striatum or sham lesions. As expected, hippocampal lesions predominantly caused impairment on place but not response trials. Striatal lesions increased errors on both place and response trials. Competition between systems was assessed by determining error type. Pre-lesion and sham animals primarily made errors to arms associated with the wrong (alternative) strategy; this was not found after lesions. The data suggest a qualitative change in the relationship between hippocampal and striatal systems as a task is well learned. During acquisition the two systems work in parallel, competing with each other. After task acquisition, the two systems become more integrated and interdependent. The fact that with extensive training (as something becomes a "habit") behaviors become dependent upon the dorsal lateral striatum has been previously shown. The current findings indicate that dorsal lateral striatum involvement occurs even when the behavior is spatial and continues to require hippocampal processing. Published by Elsevier Inc.

  7. Multivariable speed synchronisation for a parallel hybrid electric vehicle drivetrain

    NASA Astrophysics Data System (ADS)

    Alt, B.; Antritter, F.; Svaricek, F.; Schultalbers, M.

    2013-03-01

    In this article, a new drivetrain configuration of a parallel hybrid electric vehicle is considered and a novel model-based control design strategy is given. In particular, the control design covers the speed synchronisation task during a restart of the internal combustion engine. The proposed multivariable synchronisation strategy is based on feedforward and decoupled feedback controllers. The performance and the robustness properties of the closed-loop system are illustrated by nonlinear simulation results.

  8. Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph; Krishna, Lala; Gute, Douglas

    1997-01-01

    Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves the partitioning of thermal models into several substructured levels wherein an optimal balance into various associated bandwidths is achieved. The details are described in this report. Overall, the report is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.

  9. Autocratic strategies for alternating games.

    PubMed

    McAvoy, Alex; Hauert, Christoph

    2017-02-01

    Repeated games have a long tradition in the behavioral sciences and evolutionary biology. Recently, strategies were discovered that permit an unprecedented level of control over repeated interactions by enabling a player to unilaterally enforce linear constraints on payoffs. Here, we extend this theory of "zero-determinant" (or, more generally, "autocratic") strategies to alternating games, which are often biologically more relevant than traditional synchronous games. Alternating games naturally result in asymmetries between players because the first move matters or because players might not move with equal probabilities. In a strictly-alternating game with two players, X and Y, we give conditions for the existence of autocratic strategies for player X when (i) X moves first and (ii) Y moves first. Furthermore, we show that autocratic strategies exist even for (iii) games with randomly-alternating moves. Particularly important categories of autocratic strategies are extortionate and generous strategies, which enforce unfavorable and favorable outcomes for the opponent, respectively. We illustrate these strategies using the continuous Donation Game, in which a player pays a cost to provide a benefit to the opponent according to a continuous cooperative investment level. Asymmetries due to alternating moves could easily arise from dominance hierarchies, and we show that they can endow subordinate players with more autocratic strategies than dominant players. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Introducing Differential Equations Students to the Fredholm Alternative--In Staggered Doses

    ERIC Educational Resources Information Center

    Savoye, Philippe

    2011-01-01

    The development, in an introductory differential equations course, of boundary value problems in parallel with initial value problems and the Fredholm Alternative. Examples are provided of pairs of homogeneous and nonhomogeneous boundary value problems for which existence and uniqueness issues are considered jointly. How this heightens students'…

  11. Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit

    We present a decentralized control strategy that yields switch interleaving among parallel-connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.

  12. Construction and comparison of parallel implicit kinetic solvers in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Titarev, Vladimir; Dumbser, Michael; Utyuzhnikov, Sergey

    2014-01-01

    The paper is devoted to the further development and systematic performance evaluation of a recent deterministic framework Nesvetay-3D for modelling three-dimensional rarefied gas flows. Firstly, a review of the existing discretization and parallelization strategies for solving numerically the Boltzmann kinetic equation with various model collision integrals is carried out. Secondly, a new parallelization strategy for the implicit time evolution method is implemented which improves scaling on large CPU clusters. Accuracy and scalability of the methods are demonstrated on a pressure-driven rarefied gas flow through a finite-length circular pipe as well as an external supersonic flow over a three-dimensional re-entry geometry of complicated aerodynamic shape.

  13. Bilingual parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  14. 77 FR 41370 - Proposed Information Collection; Comment Request; 2013 Alternative Contact Strategy Test

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-13

    ... Alternative Contact Strategy Test AGENCY: U.S. Census Bureau, Commerce. ACTION: Notice. SUMMARY: The...-response. This research will be conducted through a series of projects and tests throughout the decade... 2013 Alternative Contact Strategy Test is the first test to support this research. The Census Bureau...

  15. Developing Local Lifelong Guidance Strategies.

    ERIC Educational Resources Information Center

    Watts, A. G.; Hawthorn, Ruth; Hoffbrand, Jill; Jackson, Heather; Spurling, Andrea

    1997-01-01

    Outlines the background, rationale, methodology, and outcomes of developing local lifelong guidance strategies in four geographic areas. Analyzes the main components of the strategies developed and addresses a number of issues relating to the process of strategy development. Explores implications for parallel work in other localities. (RJM)

  16. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  17. 3D Data Denoising via Nonlocal Means Filter by Using Parallel GPU Strategies

    PubMed Central

    Cuomo, Salvatore; De Michele, Pasquale; Piccialli, Francesco

    2014-01-01

    The Nonlocal Means (NLM) algorithm is widely considered a state-of-the-art denoising filter in many research fields. Its high computational complexity has led researchers to develop parallel programming approaches and to use massively parallel architectures such as GPUs. In recent years, GPU devices have made it possible to achieve reasonable running times by filtering 3D datasets slice-by-slice with a 2D NLM algorithm. In our approach we design and implement a fully 3D Nonlocal Means parallel filter, adopting different algorithm mapping strategies on a GPU architecture and a multi-GPU framework, in order to demonstrate its high applicability and scalability. The experimental results we obtained support the usability of our approach in a broad spectrum of application scenarios such as magnetic resonance imaging (MRI) or video sequence denoising. PMID:25045397
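
    To make the NLM weighting idea concrete, here is a naive single-threaded 2D sketch: each output pixel is a weighted average of search-window pixels, with weights decaying with the squared distance between patches. The paper's contribution is a fully 3D, multi-GPU implementation, which is not shown; the patch size, search radius, and filtering parameter h below are illustrative.

        import numpy as np

        def nlm_denoise(img, patch=1, search=5, h=0.1):
            pad = patch + search
            padded = np.pad(img, pad, mode="reflect")
            out = np.zeros_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad, j + pad
                    ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
                    weights, values = [], []
                    for di in range(-search, search + 1):
                        for dj in range(-search, search + 1):
                            ni, nj = ci + di, cj + dj
                            cand = padded[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                            d2 = np.mean((ref - cand) ** 2)       # patch dissimilarity
                            weights.append(np.exp(-d2 / (h * h)))
                            values.append(padded[ni, nj])
                    w = np.array(weights)
                    out[i, j] = np.dot(w, values) / w.sum()
            return out

        noisy = np.clip(np.eye(32) + 0.1 * np.random.randn(32, 32), 0, 1)
        print(nlm_denoise(noisy).shape)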

  18. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  19. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  20. Flow visualization in radial flow through stationary and corotating parallel disks

    NASA Astrophysics Data System (ADS)

    Mochizuki, S.; Tanaka, M.; Yang, Wen-Jei

    Paraffin mist is used here as a tracer to observe the patterns in the radial flow through both stationary and corotating parallel disks. The periodic and alternating generation of separation bubbles on both disks and the resulting flow fluctuation and turbulent flow in the radial channel are studied. Stall cells are visualized around the outer rim of the corotating disks.

  1. Combining Different Conceptual Change Methods within Four-Step Constructivist Teaching Model: A Sample Teaching of Series and Parallel Circuits

    ERIC Educational Resources Information Center

    Ipek, Hava; Calik, Muammer

    2008-01-01

    Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…

  2. Cloud-Coffee: implementation of a parallel consistency-based multiple alignment algorithm in the T-Coffee package and its benchmarking on the Amazon Elastic-Cloud.

    PubMed

    Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric

    2010-08-01

    We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html

  3. Method and apparatus for fabrication of high gradient insulators with parallel surface conductors spaced less than one millimeter apart

    DOEpatents

    Sanders, David M.; Decker, Derek E.

    1999-01-01

    Optical patterns and lithographic techniques are used as part of a process to embed parallel and evenly spaced conductors in the non-planar surfaces of an insulator to produce high gradient insulators. The approach extends the size that high gradient insulating structures can be fabricated as well as improves the performance of those insulators by reducing the scale of the alternating parallel lines of insulator and conductor along the surface. This fabrication approach also substantially decreases the cost required to produce high gradient insulators.

  4. Simulating the Effects of Alternative Forest Management Strategies on Landscape Structure

    Treesearch

    Eric J. Gustafson; Thomas Crow

    1996-01-01

    Quantitative, spatial tools are needed to assess the long-term spatial consequences of alternative management strategies for land use planning and resource management. We constructed a timber harvest allocation model (HARVEST) that provides a visual and quantitative means to predict the spatial pattern of forest openings produced by alternative harvest strategies....

  5. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction of the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  6. Distributed and parallel Ada and the Ada 9X recommendations

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.

    1992-01-01

    Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that are comprised of parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.

  7. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
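
    The core idea of replacing the eigendecomposition with an Expectation-Maximization iteration can be sketched as follows (a Roweis-style EM for PCA on synthetic data). The dimensions, iteration count, and the NumPy formulation are illustrative assumptions and are not the parallel PEM-PCA architecture itself.

    ```python
    # Roweis-style EM iteration for PCA on synthetic data: the E and M steps
    # below replace the explicit covariance eigendecomposition. Dimensions and
    # iteration count are arbitrary; this is not the parallel PEM-PCA design.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 500, 50, 5                            # samples, features, components
    X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
    X += 0.1 * rng.normal(size=(n, d))
    X -= X.mean(axis=0)                             # center the data

    W = rng.normal(size=(d, k))                     # initial subspace guess
    for _ in range(100):
        Z = X @ W @ np.linalg.inv(W.T @ W)          # E-step: latent coordinates
        W = X.T @ Z @ np.linalg.inv(Z.T @ Z)        # M-step: re-fit the subspace

    Q, _ = np.linalg.qr(W)                          # orthonormal basis of span(W)
    cov = X.T @ X / n
    print("fraction of variance captured:",
          np.trace(Q.T @ cov @ Q) / np.trace(cov))
    ```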

  8. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
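
    A minimal serial sketch of why polynomial smoothers are attractive in parallel settings: the Chebyshev iteration below uses only matrix-vector products (which parallelize naturally), whereas each Gauss-Seidel sweep updates unknowns one after another. The 1D Poisson matrix, sweep counts, and eigenvalue bounds are toy choices, not the multigrid setting of the paper.

    ```python
    # Gauss-Seidel sweeps vs. a Chebyshev polynomial smoother on a 1D Poisson
    # matrix. The Chebyshev recurrence needs only matrix-vector products, which
    # is why it parallelizes well; the eigenvalue bounds below are toy choices.
    import numpy as np

    n = 200
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
    b = np.zeros(n)
    x0 = np.random.default_rng(0).normal(size=n)           # random initial error

    def gauss_seidel(A, b, x, sweeps):
        for _ in range(sweeps):
            for i in range(len(b)):                         # inherently sequential
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    def chebyshev(A, b, x, degree, lmin, lmax):
        """Standard Chebyshev iteration targeting eigenvalues in [lmin, lmax]."""
        theta, delta = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
        sigma = theta / delta
        rho = 1.0 / sigma
        r = b - A @ x                                       # only mat-vecs needed
        d = r / theta
        x = x + d
        for _ in range(degree - 1):
            r = b - A @ x
            rho_new = 1.0 / (2.0 * sigma - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            x = x + d
            rho = rho_new
        return x

    print("initial error norm:      ", np.linalg.norm(x0))
    print("after 3 Gauss-Seidel:    ", np.linalg.norm(gauss_seidel(A, b, x0.copy(), 3)))
    print("after degree-3 Chebyshev:", np.linalg.norm(chebyshev(A, b, x0.copy(), 3, 1.0, 4.0)))
    ```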

  9. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    PubMed Central

    Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.

    2017-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741

  10. Quantum Iterative Deepening with an Application to the Halting Problem

    PubMed Central

    Tarrataca, Luís; Wichert, Andreas

    2013-01-01

    Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465

  11. Variably-saturated groundwater modeling for optimizing managed aquifer recharge using trench infiltration

    USGS Publications Warehouse

    Heilweil, Victor M.; Benoit, Jerome; Healy, Richard W.

    2015-01-01

    Spreading-basin methods have resulted in more than 130 million cubic meters of recharge to the unconfined Navajo Sandstone of southern Utah in the past decade, but infiltration rates have slowed in recent years because of reduced hydraulic gradients and clogging. Trench infiltration is a promising alternative technique for increasing recharge and minimizing evaporation. This paper uses a variably saturated flow model to further investigate the relative importance of the following variables on rates of trench infiltration to unconfined aquifers: saturated hydraulic conductivity, trench spacing and dimensions, initial water-table depth, alternate wet/dry periods, and number of parallel trenches. Modeling results showed (1) increased infiltration with higher hydraulic conductivity, deeper initial water tables, and larger spacing between parallel trenches, (2) deeper or wider trenches do not substantially increase infiltration, (3) alternating wet/dry periods result in less overall infiltration than keeping the trenches continuously full, and (4) larger numbers of parallel trenches within a fixed area increase infiltration but with a diminishing effect as trench spacing becomes tighter. An empirical equation for estimating expected trench infiltration rates as a function of hydraulic conductivity and initial water-table depth was derived and can be used for evaluating feasibility of trench infiltration in other hydrogeologic settings.

  12. Thin film metallic sensors in an alternating magnetic field for magnetic nanoparticle hyperthermia cancer therapy

    NASA Astrophysics Data System (ADS)

    Hussein, Z. A.; Boekelheide, Z.

    In magnetic nanoparticle hyperthermia in an alternating magnetic field for cancer therapy, it is important to monitor the temperature in situ. This can be done optically or electrically, but electronic measurements can be problematic because conducting parts heat up in a changing magnetic field. Microfabricated thin film sensors may be advantageous because eddy current heating is a function of size, and are promising for further miniaturization of sensors and fabrication of arrays of sensors. Thin films could also be used for in situ magnetic field sensors or for strain sensors. For a proof of concept, we fabricated a metallic thin film resistive thermometer by photolithographically patterning a 500Å Au/100Å Cr thin film on a glass substrate. Measurements were taken in a solenoidal coil supplying 0.04 T (rms) at 235 kHz with the sensor parallel and perpendicular to the magnetic field. In the parallel orientation, the resistive thermometer mirrored the background heating from the coil, while in the perpendicular orientation self-heating was observed due to eddy current heating of the conducting elements by Faraday's law. This suggests that metallic thin film sensors can be used in an alternating magnetic field, parallel to the field, with no significant self-heating.

  13. Airfoil-based electromagnetic energy harvester containing parallel array motion between moving coil and multi-pole magnets towards enhanced power density.

    PubMed

    Leung, Chung Ming; Wang, Ya; Chen, Wusi

    2016-11-01

    In this letter, the airfoil-based electromagnetic energy harvester containing parallel array motion between moving coil and trajectory matching multi-pole magnets was investigated. The magnets were aligned in an alternatively magnetized formation of 6 magnets to explore enhanced power density. In particular, the magnet array was positioned in parallel to the trajectory of the tip coil within its tip deflection span. The finite element simulations of the magnetic flux density and induced voltages at an open circuit condition were studied to find the maximum number of alternatively magnetized magnets that was required for the proposed energy harvester. Experimental results showed that the energy harvester with a pair of 6 alternatively magnetized linear magnet arrays was able to generate an induced voltage (V_o) of 20 V, with an open circuit condition, and 475 mW, under a 30 Ω optimal resistance load operating with the wind speed (U) at 7 m/s and a natural bending frequency of 3.54 Hz. Compared to the traditional electromagnetic energy harvester with a single magnet moving through a coil, the proposed energy harvester, containing multi-pole magnets and parallel array motion, enables the moving coil to accumulate a stronger magnetic flux in each period of the swinging motion. In addition to the comparison made with the airfoil-based piezoelectric energy harvester of the same size, our proposed electromagnetic energy harvester generates 11 times more power output, which is more suitable for high-power-density energy harvesting applications at regions with low environmental frequency.

  14. Decentralized Interleaving of Paralleled Dc-Dc Buck Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit

    We present a decentralized control strategy that yields switch interleaving among parallel-connected dc-dc buck converters. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform and no communication between different controllers is needed. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work presents the first fully decentralized strategy for switch interleaving in paralleled dc-dc buck converters.

  15. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.

  16. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.

    1994-01-01

    Applications of high-performance parallel computation are described for the analysis of complete jet engines, considering the multi-discipline coupled problem. The coupled problem involves interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; effect of partitioning strategies, augmentation and temporal solution procedures; sensitivity of response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware driven representation, and tradeoff studies between partitioning schemes and fully coupled treatment.

  17. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
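
    The "multiple chains" flavour of parallel MCMC can be sketched with nothing more than the standard library: the toy below runs independent random-walk Metropolis chains for a 1D Gaussian target in separate processes and pools their draws. The target, proposal scale, chain length, and process count are illustrative assumptions unrelated to the genomic models discussed above.

    ```python
    # Independent Metropolis chains for a 1D Gaussian target, one per worker
    # process; the pooled draws approximate the posterior. All model choices
    # here are toy assumptions unrelated to the genomic models in the record.
    import numpy as np
    from multiprocessing import Pool

    def log_target(x):
        return -0.5 * (x - 3.0) ** 2            # unnormalized N(3, 1) log-density

    def run_chain(args):
        seed, n_steps = args
        rng = np.random.default_rng(seed)
        x, samples = 0.0, []
        for _ in range(n_steps):
            prop = x + rng.normal(scale=1.0)    # random-walk proposal
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
            samples.append(x)
        return np.array(samples[n_steps // 2:])  # discard the first half as burn-in

    if __name__ == "__main__":
        with Pool(4) as pool:                   # one chain per process
            chains = pool.map(run_chain, [(seed, 20000) for seed in range(4)])
        pooled = np.concatenate(chains)
        print("pooled mean ~", pooled.mean(), " pooled sd ~", pooled.std())
    ```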

  18. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  19. A divide and conquer approach to the nonsymmetric eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1991-01-01

    Serial computation combined with high communication costs on distributed-memory multiprocessors make parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.

  20. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques efficiently to explore a likelihood function, posterior distribution or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  1. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for the parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (total 640 cores), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.

  2. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Lin, Lin

    2017-10-01

    Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations are needed to be sequentially performed to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iteration. Instead of evaluating the Fermi operator for multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.
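
    A schematic of the bound-tightening idea, under strong simplifying assumptions: the Fermi-operator evaluation is replaced by a cheap electron count over a made-up eigenvalue spectrum, and several trial chemical potentials are evaluated per iteration (these evaluations are exactly what would run in parallel) to shrink the bracket around the value giving the target electron number. This is not the PEXSI algorithm itself.

    ```python
    # Bracket the chemical potential by evaluating the electron count at several
    # trial values per iteration; in the real method each count is a (costly)
    # Fermi-operator evaluation that can run on its own processor group. The
    # spectrum, temperature, and electron target below are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    eigvals = np.sort(rng.uniform(-5.0, 5.0, size=200))   # toy Kohn-Sham spectrum
    beta, n_target = 20.0, 100                            # inverse temperature, electrons

    def electron_count(mu):
        return np.sum(1.0 / (1.0 + np.exp(beta * (eigvals - mu))))

    lo, hi = eigvals.min(), eigvals.max()
    for _ in range(10):
        trial_mu = np.linspace(lo, hi, 5)                 # candidates evaluated "in parallel"
        counts = np.array([electron_count(m) for m in trial_mu])
        below = trial_mu[counts <= n_target]              # counts increase with mu,
        above = trial_mu[counts >= n_target]              # so these tighten the bracket
        if below.size:
            lo = below.max()
        if above.size:
            hi = above.min()

    print("chemical potential bracket: [%.4f, %.4f]" % (lo, hi))
    print("electron count at midpoint:", electron_count(0.5 * (lo + hi)))
    ```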

  3. Multivariate curve resolution based chromatographic peak alignment combined with parallel factor analysis to exploit second-order advantage in complex chromatographic measurements.

    PubMed

    Parastar, Hadi; Akvan, Nadia

    2014-03-13

    In the present contribution, a new combination of multivariate curve resolution-correlation optimized warping (MCR-COW) with trilinear parallel factor analysis (PARAFAC) is developed to exploit second-order advantage in complex chromatographic measurements. In MCR-COW, the complexity of the chromatographic data is reduced by arranging the data in a column-wise augmented matrix, analyzing using MCR bilinear model and aligning the resolved elution profiles using COW in a component-wise manner. The aligned chromatographic data is then decomposed using trilinear model of PARAFAC in order to exploit pure chromatographic and spectroscopic information. The performance of this strategy is evaluated using simulated and real high-performance liquid chromatography-diode array detection (HPLC-DAD) datasets. The obtained results showed that the MCR-COW can efficiently correct elution time shifts of target compounds that are completely overlapped by coeluted interferences in complex chromatographic data. In addition, the PARAFAC analysis of aligned chromatographic data has the advantage of unique decomposition of overlapped chromatographic peaks to identify and quantify the target compounds in the presence of interferences. Finally, to confirm the reliability of the proposed strategy, the performance of the MCR-COW-PARAFAC is compared with the frequently used methods of PARAFAC, COW-PARAFAC, multivariate curve resolution-alternating least squares (MCR-ALS), and MCR-COW-MCR. In general, in most of the cases the MCR-COW-PARAFAC showed an improvement in terms of lack of fit (LOF), relative error (RE) and spectral correlation coefficients in comparison to the PARAFAC, COW-PARAFAC, MCR-ALS and MCR-COW-MCR results. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory.

    PubMed

    Jia, Weile; Lin, Lin

    2017-10-14

    Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations are needed to be sequentially performed to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iteration. Instead of evaluating the Fermi operator for multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.

  5. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  6. Implementation of a parallel unstructured Euler solver on the CM-5

    NASA Technical Reports Server (NTRS)

    Morano, Eric; Mavriplis, D. J.

    1995-01-01

    An efficient unstructured 3D Euler solver is parallelized on a Thinking Machine Corporation Connection Machine 5, distributed memory computer with vectoring capability. In this paper, the single instruction multiple data (SIMD) strategy is employed through the use of the CM Fortran language and the CMSSL scientific library. The performance of the CMSSL mesh partitioner is evaluated and the overall efficiency of the parallel flow solver is discussed.

  7. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  8. Phased array ghost elimination.

    PubMed

    Kellman, Peter; McVeigh, Elliot R

    2006-05-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
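
    As a much-reduced illustration of the linear-algebra step that both SENSE and PAGE rely on, the sketch below unfolds a 1D, two-fold aliased "image" by solving a small least-squares system per pixel pair using assumed coil sensitivities. The object, coil maps, and acceleration factor are synthetic; the PAGE nulling constraints on full-FOV images are not implemented here.

    ```python
    # Toy SENSE-style unfolding: with two-fold undersampling each aliased pixel
    # is a sensitivity-weighted sum of two true pixels, recovered by a small
    # least-squares solve. Object, coil maps, and acceleration are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    N, C, R = 64, 4, 2                          # FOV samples, coils, acceleration
    obj = np.zeros(N)
    obj[20:44] = 1.0                            # 1D "object" along phase encode
    sens = rng.normal(size=(C, N)) + 1j * rng.normal(size=(C, N))   # coil maps

    half = N // R
    # Aliased coil images: pixel p folds onto pixel p + N/R.
    aliased = np.stack([s[:half] * obj[:half] + s[half:] * obj[half:]
                        for s in sens])         # shape (C, N/R)

    recon = np.zeros(N, dtype=complex)
    for p in range(half):
        S = sens[:, [p, p + half]]              # (C, 2) sensitivity matrix
        recon[[p, p + half]] = np.linalg.lstsq(S, aliased[:, p], rcond=None)[0]

    print("max unfolding error:", np.max(np.abs(recon - obj)))
    ```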

  9. Phased array ghost elimination

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. PMID:16705636

  10. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    DOE PAGES

    Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
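
    The summation fix described above can be illustrated language-agnostically; the Python sketch below contrasts a sequential accumulation with a binary-tree reduction over per-image partial sums. The tree is executed serially here, but each level contains only independent pairwise additions, which is what yields the improved scaling in the coarray version.

    ```python
    # Sequential accumulation vs. a binary-tree reduction over per-image partial
    # sums. The tree is executed serially here, but every level consists of
    # independent pairwise additions, so it needs only O(log P) communication steps.
    import numpy as np

    def sequential_sum(values):
        total = 0.0
        for v in values:                 # P steps, one after another
            total += v
        return total

    def tree_sum(values):
        vals = list(values)
        while len(vals) > 1:             # log2(P) levels
            if len(vals) % 2:            # odd count: pad with a neutral element
                vals.append(0.0)
            vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        return vals[0]

    partials = np.random.default_rng(0).normal(size=32)   # one partial sum per "image"
    print(sequential_sum(partials), tree_sum(partials))
    ```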

  11. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  12. Economical launching and accelerating control strategy for a single-shaft parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Song, Jian; Li, Liang; Li, Shengbo; Cao, Dongpu

    2016-08-01

    This paper presents an economical launching and accelerating mode, including four ordered phases: pure electrical driving, clutch engagement and engine start-up, engine active charging, and engine driving, which suits the alternating driving conditions and improves the fuel economy of a hybrid electric bus (HEB) during typical city-bus driving scenarios. By utilizing the fast response of the electric motor (EM), an adaptive controller for the EM is designed to realize the power demand during the pure electrical driving mode, the engine starting mode, and the engine active charging mode. Concurrently, the smoothness issue induced by the sequential mode transitions is solved with a coordinated control logic for the engine, EM, and clutch. Simulation and experimental results show that the proposed launching and accelerating mode and its control methods are effective in improving fuel economy and ensuring drivability during the fast transitions between the operation modes of the HEB.

  13. Effects of Time between Trials on Rats' and Pigeons' Choices with Probabilistic Delayed Reinforcers

    ERIC Educational Resources Information Center

    Mazur, James E.; Biondi, Dawn R.

    2011-01-01

    Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the…

  14. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
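
    A minimal hash-tree sketch using Python's hashlib: leaves are hashed independently (the parallelizable part) and combined pairwise up to a single root. The chunk size and the duplicate-last-leaf padding rule are arbitrary illustrative choices, and this is not the Keccak/SHA-3 tree mode prototyped by the authors.

    ```python
    # Merkle-style hash tree with hashlib: leaves are hashed independently (the
    # parallelizable part) and combined pairwise up to a single root. Chunk size
    # and the duplicate-last-leaf padding rule are arbitrary illustrative choices.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(chunks):
        level = [h(c) for c in chunks]           # leaf hashes: independent work items
        while len(level) > 1:
            if len(level) % 2:                   # duplicate the last node if odd
                level.append(level[-1])
            level = [h(level[i] + level[i + 1])  # one tree level per pass
                     for i in range(0, len(level), 2)]
        return level[0]

    message = b"x" * 4096
    chunks = [message[i:i + 512] for i in range(0, len(message), 512)]
    print(merkle_root(chunks).hex())
    ```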

  15. The brain-derived neurotrophic factor Val66Met polymorphism is associated with reduced functional magnetic resonance imaging activity in the hippocampus and increased use of caudate nucleus-dependent strategies in a human virtual navigation task

    PubMed Central

    Banner, Harrison; Bhat, Venkataramana; Etchamendy, Nicole; Joober, Ridha; Bohbot, Véronique D

    2011-01-01

    Multiple memory systems are involved in parallel processing of spatial information during navigation. A series of studies have distinguished between hippocampus-dependent ‘spatial’ navigation, which relies on knowledge of the relationship between landmarks in one’s environment to build a cognitive map, and habit-based ‘response’ learning, which requires the memorization of a series of actions and is mediated by the caudate nucleus. Studies have demonstrated that people spontaneously use one of these two alternative navigational strategies with almost equal frequency to solve a given navigation task, and that strategy correlates with functional magnetic resonance imaging (fMRI) activity and grey matter density. Although there is evidence for experience modulating grey matter in the hippocampus, genetic contributions may also play an important role in the hippocampus and caudate nucleus. Recently, the Val66Met polymorphism of the brain-derived neurotrophic factor (BDNF) gene has emerged as a possible inhibitor of hippocampal function. We have investigated the role of the BDNF Val66Met polymorphism on virtual navigation behaviour and brain activation during an fMRI navigation task. Our results demonstrate a genetic contribution to spontaneous strategies, where ‘Met’ carriers use a response strategy more frequently than individuals homozygous for the ‘Val’ allele. Additionally, we found increased hippocampal activation in the Val group relative to the Met group during performance of a virtual navigation task. Our results support the idea that the BDNF gene with the Val66Met polymorphism is a novel candidate gene involved in determining spontaneous strategies during navigation behaviour. PMID:21255124

  16. A Model for Speedup of Parallel Programs

    DTIC Science & Technology

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static…

  17. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.

  18. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  19. Nonlinear and parallel algorithms for finite element discretizations of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Arteaga, Santiago Egido

    1998-12-01

    The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of the pressure space basis and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm it experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so that it is competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems. We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor to improve performance. Implementation details on a CM-5 are presented.
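
    The contrast between Picard (Oseen-type) and Newton linearization can be seen on a much smaller problem than the Navier-Stokes equations; the sketch below applies both, written as residual corrections, to a steady 1D viscous Burgers equation. The grid size, viscosity, boundary values, and iteration limits are arbitrary illustrative choices, and the example says nothing about the preconditioning or parallel aspects discussed in the abstract.

    ```python
    # Picard (lagged advection coefficient) vs. Newton iteration on a steady
    # 1D viscous Burgers problem, both written as residual corrections.
    # Grid size, viscosity, and boundary values are arbitrary toy choices.
    import numpy as np

    n, nu = 64, 0.1
    h = 1.0 / (n + 1)
    uL, uR = 1.0, -1.0                      # Dirichlet boundary values

    def residual(u):
        ue = np.concatenate(([uL], u, [uR]))
        adv = ue[1:-1] * (ue[2:] - ue[:-2]) / (2 * h)
        dif = nu * (ue[2:] - 2 * ue[1:-1] + ue[:-2]) / h**2
        return adv - dif

    def oseen_matrix(coef):
        """Linearized operator v -> coef*v_x - nu*v_xx at the interior nodes."""
        A = np.diag(np.full(n, 2 * nu / h**2))
        for i in range(n - 1):
            A[i, i + 1] = coef[i] / (2 * h) - nu / h**2
            A[i + 1, i] = -coef[i + 1] / (2 * h) - nu / h**2
        return A

    def solve(method, iters=40, tol=1e-10):
        x = np.linspace(h, 1 - h, n)
        u = uL + (uR - uL) * x              # linear initial guess
        history = []
        for _ in range(iters):
            F = residual(u)
            history.append(np.linalg.norm(F))
            if history[-1] < tol:
                break
            M = oseen_matrix(u)             # Picard: freeze the advection coefficient
            if method == "newton":          # Newton: add the coefficient derivative
                ue = np.concatenate(([uL], u, [uR]))
                M = M + np.diag((ue[2:] - ue[:-2]) / (2 * h))
            u = u + np.linalg.solve(M, -F)  # both methods as residual corrections
        return history

    print("Picard residual history:", ["%.1e" % r for r in solve("picard")])
    print("Newton residual history:", ["%.1e" % r for r in solve("newton")])
    ```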

  20. Photovoltaic cell array

    NASA Technical Reports Server (NTRS)

    Eliason, J. T. (Inventor)

    1976-01-01

    A photovoltaic cell array consisting of parallel columns of silicon filaments is described. Each fiber is doped to produce an inner region of one polarity type and an outer region of an opposite polarity type to thereby form a continuous radial semiconductor junction. Spaced rows of electrical contacts alternately connect to the inner and outer regions to provide a plurality of electrical outputs which may be combined in parallel or in series.

  1. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
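
    As a point of reference for the recurrence that the COPS implementation vectorizes, a plain scalar Viterbi decoder can be sketched as below. This is a generic HMM decoder with hypothetical toy parameters, not the HMMER3 or COPS code, and the SIMD striping and cache partitioning discussed above are deliberately omitted.

    ```python
    # Hedged scalar sketch of Viterbi decoding for a small HMM (log-space).
    import numpy as np

    def viterbi(obs, log_trans, log_emit, log_start):
        """obs: observation indices; log_*: log-probability tables."""
        n_states, T = log_trans.shape[0], len(obs)
        score = np.full((T, n_states), -np.inf)
        back = np.zeros((T, n_states), dtype=int)
        score[0] = log_start + log_emit[:, obs[0]]
        for t in range(1, T):
            for s in range(n_states):
                cand = score[t - 1] + log_trans[:, s]   # scores via every predecessor
                back[t, s] = int(np.argmax(cand))
                score[t, s] = cand[back[t, s]] + log_emit[s, obs[t]]
        path = [int(np.argmax(score[-1]))]              # backtrace the best state path
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1], float(np.max(score[-1]))

    # toy 2-state model with made-up parameters
    lt = np.log([[0.7, 0.3], [0.4, 0.6]])
    le = np.log([[0.9, 0.1], [0.2, 0.8]])
    ls = np.log([0.5, 0.5])
    print(viterbi([0, 0, 1, 1], lt, le, ls))
    ```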

  2. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    PubMed

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrisochoides, N.; Sukup, F.

    In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by eliminating the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.
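
    For readers unfamiliar with the underlying algorithm, the sequential Bowyer-Watson insertion step that the paper parallelizes can be sketched as follows. This is a minimal, assumption-laden reference implementation (no task-parallel scheduling, no robustness safeguards), not the authors' code.

    ```python
    # Hedged sketch of sequential Bowyer-Watson incremental Delaunay triangulation.
    def circumcircle(a, b, c):
        """Circumcenter and squared radius of the circle through a, b, c."""
        d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
        ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
              + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
        uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
              + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
        return (ux, uy), (a[0] - ux)**2 + (a[1] - uy)**2

    def bowyer_watson(points):
        """Insert points one at a time: remove the 'cavity' of triangles whose
        circumcircle contains the new point, then re-triangulate its boundary."""
        pts = [tuple(p) for p in points]
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        span = 10.0 * max(max(xs) - min(xs), max(ys) - min(ys)) + 1.0
        cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
        pts += [(cx - span, cy - span), (cx + span, cy - span), (cx, cy + span)]
        n = len(points)
        tris = [(n, n + 1, n + 2)]                       # enclosing super-triangle
        for i in range(n):
            p, bad, edge_count = pts[i], [], {}
            for t in tris:
                c, r2 = circumcircle(pts[t[0]], pts[t[1]], pts[t[2]])
                if (p[0] - c[0])**2 + (p[1] - c[1])**2 < r2:
                    bad.append(t)
            for t in bad:                                # edges of the cavity boundary
                for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                    key = tuple(sorted(e))
                    edge_count[key] = edge_count.get(key, 0) + 1
            tris = [t for t in tris if t not in bad]
            tris += [(e[0], e[1], i) for e, k in edge_count.items() if k == 1]
        return [t for t in tris if all(v < n for v in t)]  # drop super-triangle triangles

    print(bowyer_watson([(0, 0), (1, 0), (0, 1), (1, 1), (0.3, 0.6)]))
    ```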

  4. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  5. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    Results are presented from research conducted to develop a parallel graphics application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
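
    As a serial reference for the numerical scheme described, a finite-difference (leapfrog) update for the vibrating string can be sketched as below; the grid sizes and initial condition are illustrative, and the multiprocessor synchronization strategies discussed in the report are not reproduced.

    ```python
    # Hedged sketch of an explicit finite-difference solver for the 1-D wave
    # equation u_tt = c^2 u_xx with fixed (clamped) string ends.
    import numpy as np

    nx, nt = 101, 400
    c, L, T = 1.0, 1.0, 1.0
    dx, dt = L / (nx - 1), T / nt
    r2 = (c * dt / dx) ** 2                 # squared Courant number, must be <= 1 for stability

    x = np.linspace(0.0, L, nx)
    u_prev = np.sin(np.pi * x)              # initial displacement
    u = u_prev.copy()                       # special first step for zero initial velocity
    u[1:-1] = u_prev[1:-1] + 0.5 * r2 * (u_prev[2:] - 2.0 * u_prev[1:-1] + u_prev[:-2])

    for _ in range(nt):
        u_next = np.zeros(nx)               # endpoints stay clamped at zero
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next

    print(u[:5])                            # displacement near the left end after nt steps
    ```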

  6. TARGETED CAPTURE IN EVOLUTIONARY AND ECOLOGICAL GENOMICS

    PubMed Central

    Jones, Matthew R.; Good, Jeffrey M.

    2016-01-01

    The rapid expansion of next-generation sequencing has yielded a powerful array of tools to address fundamental biological questions at a scale that was inconceivable just a few years ago. Various genome partitioning strategies to sequence select subsets of the genome have emerged as powerful alternatives to whole genome sequencing in ecological and evolutionary genomic studies. High throughput targeted capture is one such strategy that involves the parallel enrichment of pre-selected genomic regions of interest. The growing use of targeted capture demonstrates its potential power to address a range of research questions, but these approaches have yet to expand broadly across labs focused on evolutionary and ecological genomics. In part, the use of targeted capture has been hindered by the logistics of capture design and implementation in species without established reference genomes. Here we aim to 1) increase the accessibility of targeted capture to researchers working in non-model taxa by discussing capture methods that circumvent the need for a reference genome, 2) highlight the evolutionary and ecological applications where this approach is emerging as a powerful sequencing strategy, and 3) discuss the future of targeted capture and other genome partitioning approaches in light of the increasing accessibility of whole genome sequencing. Given the practical advantages and increasing feasibility of high-throughput targeted capture, we anticipate an ongoing expansion of capture-based approaches in evolutionary and ecological research, synergistic with an expansion of whole genome sequencing. PMID:26137993

  7. A Simple Chamber for Long-term Confocal Imaging of Root and Hypocotyl Development.

    PubMed

    Kirchhelle, Charlotte; Moore, Ian

    2017-05-17

    Several aspects of plant development, such as lateral root morphogenesis, occur on time spans of several days. To study underlying cellular and subcellular processes, high resolution time-lapse microscopy strategies that preserve physiological conditions are required. Plant tissues must have adequate nutrient and water supply with sustained gaseous exchange but, when submerged and immobilized under a coverslip, they are particularly susceptible to anoxia. One strategy that has been successfully employed is the use of a perfusion system to maintain a constant supply of oxygen and nutrients. However, such arrangements can be complicated, cumbersome, and require specialized equipment. Presented here is an alternative strategy for a simple imaging system using perfluorodecalin as an immersion medium. This system is easy to set up, requires minimal equipment, and is easily mounted on a microscope stage, allowing several imaging chambers to be set up and imaged in parallel. In this system, lateral root growth rates are indistinguishable from growth rates under standard conditions on agar plates for the first two days, and lateral root growth continues at reduced rates for at least another day. Plant tissues are supplied with nutrients via an agar slab that can be used also to administer a range of pharmacological compounds. The system was established to monitor lateral root development but is readily adaptable to image other plant organs such as hypocotyls and primary roots.

  8. Search asymmetries: parallel processing of uncertain sensory information.

    PubMed

    Vincent, Benjamin T

    2011-08-01

    What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
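
    The max-rule idea in the abstract lends itself to a short Monte-Carlo illustration. The sketch below is a toy simulation with made-up stimulus values and noise levels, not the paper's fitted models: each display item yields one noisy response and the observer reports "target present" when the maximum exceeds a criterion, so swapping which stimulus class is the more uncertain one changes hit and false-alarm rates.

    ```python
    # Hedged toy simulation of the max decision rule for visual search.
    import numpy as np

    rng = np.random.default_rng(0)

    def max_rule_rates(d_prime, sigma_target, sigma_distractor,
                       n_items=8, criterion=1.5, n_trials=20000):
        # target-present trials: one target plus (n_items - 1) distractors
        tgt = rng.normal(d_prime, sigma_target, n_trials)
        dis = rng.normal(0.0, sigma_distractor, (n_trials, n_items - 1))
        hits = (np.maximum(tgt, dis.max(axis=1)) > criterion).mean()
        # target-absent trials: distractors only
        fas = (rng.normal(0.0, sigma_distractor, (n_trials, n_items)).max(axis=1) > criterion).mean()
        return hits, fas

    # swap which stimulus class (low- vs high-uncertainty) plays the target role
    print("uncertain target among stable distractors:", max_rule_rates(1.0, 1.5, 0.5))
    print("stable target among uncertain distractors:", max_rule_rates(1.0, 0.5, 1.5))
    ```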

  9. The application of parallel wells to support the use of groundwater for sustainable irrigation

    NASA Astrophysics Data System (ADS)

    Suhardi

    2018-05-01

    The use of groundwater as a source of irrigation is one alternative for meeting the water needs of plants. Using groundwater for irrigation entails a high cost because the discharge that can be taken is limited. In addition, large-scale groundwater extraction can cause environmental damage and social conflict. To minimize costs, maintain environmental quality, and prevent social conflicts, innovation in the groundwater extraction system is necessary. This study investigated an innovation based on parallel wells. Performance is measured by comparing parallel wells with a single well. The results showed that the use of parallel wells can meet the water needs of rice plants and increase the pump discharge by up to 100%. In addition, parallel wells can reduce the radius of influence of groundwater extraction compared to a single well, so as to prevent social conflict. Thus, the use of parallel wells can support the use of groundwater for sustainable irrigation.

  10. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    PubMed

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  11. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high space resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better afford the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly-efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.

  12. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  13. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  14. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  15. Optimistic barrier synchronization

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1992-01-01

    Barrier synchronization is a fundamental operation in parallel computation. In many contexts, at the point a processor enters a barrier it knows that it has already processed all the work required of it prior to synchronization. The alternative case, when a processor cannot enter a barrier with the assurance that it has already performed all the necessary pre-synchronization computation, is treated. The problem arises when the number of pre-synchronization messages to be received by a processor is unknown, for example, in a parallel discrete simulation or any other computation that is largely driven by an unpredictable exchange of messages. We describe an optimistic O(log² P) barrier algorithm for such problems, study its performance on a large-scale parallel system, and consider extensions to general associative reductions as well as associative parallel prefix computations.

  16. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
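
    For readers without SPSS or SAS, the logic of Horn's parallel analysis is compact enough to sketch in a few lines; the snippet below is an illustrative stand-alone version with hypothetical data, not the programs described in the paper.

    ```python
    # Hedged sketch of Horn's parallel analysis: retain the leading components
    # whose observed eigenvalues exceed the mean eigenvalues of random data
    # generated with the same number of cases and variables.
    import numpy as np

    def parallel_analysis(data, n_sims=200, seed=0):
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand_eig = np.zeros((n_sims, p))
        for i in range(n_sims):
            rnd = rng.standard_normal((n, p))
            rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rnd, rowvar=False))[::-1]
        mean_rand = rand_eig.mean(axis=0)
        n_components = 0
        for observed, random_mean in zip(obs_eig, mean_rand):
            if observed > random_mean:
                n_components += 1
            else:
                break
        return n_components, obs_eig, mean_rand

    # toy data built from two underlying factors (purely illustrative)
    rng = np.random.default_rng(1)
    factors = rng.standard_normal((300, 2))
    X = factors @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((300, 6))
    print("components retained:", parallel_analysis(X)[0])
    ```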

  17. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching Yuen

    1991-01-01

    A new Lagrangian formulation of the Euler equation is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.

  18. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching-Yuen

    1992-01-01

    This paper adopts a new Lagrangian formulation of the Euler equation for the calculation of two dimensional supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, we have achieved better than six times speed-up on an 8192-processor CM-2 over a single processor of a CRAY-2.

  19. Parallel Harmony Search Based Distributed Energy Resource Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on voltage profile during a day as photovoltaics (PVs) output or electrical vehicles (EVs) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
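
    For context on the metaheuristic itself, a basic serial harmony search loop is sketched below for a generic minimization problem; the parameter values, the objective, and the structure are illustrative assumptions and do not reproduce the paper's parallel, distribution-system-specific formulation.

    ```python
    # Hedged sketch of a basic harmony search: keep a memory of candidate
    # solutions, improvise new ones from memory (with occasional pitch
    # adjustment) or at random, and replace the worst member when improved.
    import numpy as np

    def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                       bandwidth=0.05, n_iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        dim = len(bounds)
        memory = lo + (hi - lo) * rng.random((hms, dim))      # harmony memory
        fitness = np.array([objective(h) for h in memory])
        for _ in range(n_iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                        # take a value from memory...
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:                     # ...and maybe pitch-adjust it
                        new[d] += bandwidth * (hi[d] - lo[d]) * rng.uniform(-1, 1)
                else:                                          # otherwise draw a random value
                    new[d] = lo[d] + (hi[d] - lo[d]) * rng.random()
            new = np.clip(new, lo, hi)
            f_new = objective(new)
            worst = int(np.argmax(fitness))
            if f_new < fitness[worst]:
                memory[worst], fitness[worst] = new, f_new
        best = int(np.argmin(fitness))
        return memory[best], fitness[best]

    # toy usage: minimize a shifted sphere function in three dimensions
    print(harmony_search(lambda v: float(np.sum((v - 0.3) ** 2)), [(-1, 1)] * 3))
    ```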

  20. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature and, under similar meteorological conditions and consumer profiles, allows the development of a heuristic forecast model that, in turn, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are the consumption of water and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when multiple forecast models are available.
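
    One simple way to realize an adaptive weighting of two forecasters is to weight them inversely to their recent errors; the sketch below illustrates that general idea on synthetic data and is an assumption-laden stand-in, not the weighting scheme calibrated in the study.

    ```python
    # Hedged sketch: combine two forecast series with weights that adapt to each
    # model's mean absolute error over a sliding window of recent observations.
    import numpy as np

    def adaptive_combination(actual, forecast_a, forecast_b, window=24, eps=1e-9):
        combined = np.empty_like(actual, dtype=float)
        for t in range(len(actual)):
            lo = max(0, t - window)
            if t == 0:
                w_a = w_b = 0.5                          # no history yet: equal weights
            else:
                err_a = np.mean(np.abs(actual[lo:t] - forecast_a[lo:t])) + eps
                err_b = np.mean(np.abs(actual[lo:t] - forecast_b[lo:t])) + eps
                w_a, w_b = 1.0 / err_a, 1.0 / err_b
                w_a, w_b = w_a / (w_a + w_b), w_b / (w_a + w_b)
            combined[t] = w_a * forecast_a[t] + w_b * forecast_b[t]
        return combined

    # toy usage: synthetic hourly demand and two imperfect forecasters
    rng = np.random.default_rng(0)
    true = 100 + 20 * np.sin(np.arange(168) * 2 * np.pi / 24)
    f_a = true + rng.normal(0, 3, true.size)             # noisy but unbiased forecaster
    f_b = true + 8 + rng.normal(0, 1, true.size)         # precise but biased forecaster
    comb = adaptive_combination(true, f_a, f_b)
    for name, series in (("combined", comb), ("model A", f_a), ("model B", f_b)):
        print(name, round(float(np.mean(np.abs(true - series))), 2))
    ```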

  1. Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Xiangyang, Xu; Siqi, Zhao; Peng, Dong

    2017-12-01

    A smooth and fast engine-start process is important for parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, there are some challenges during engine-start control. Firstly, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensating torque to drag the engine before ignition. Secondly, engine-start time is a trade-off control objective because both fast start and smooth start have to be considered. To solve these problems, this paper first analyzed the resistance of the engine-start process and established a physics model in MATLAB/Simulink. Then a model-based coordinated control strategy among engine, motor and clutch was developed. Two basic control strategies, for the fast-start and the smooth-start processes, were studied. Simulation results showed that the control objectives were realized by applying the given control strategies, which can meet different requirements from the driver.

  2. Dynamic Multiple Work Stealing Strategy for Flexible Load Balancing

    NASA Astrophysics Data System (ADS)

    Adnan; Sato, Mitsuhisa

    Lazy-task creation is an efficient method of overcoming the overhead of the grain-size problem in parallel computing. Work stealing is an effective load balancing strategy for parallel computing. In this paper, we present dynamic work stealing strategies in a lazy-task creation technique for efficient fine-grain task scheduling. The basic idea is to control load balancing granularity depending on the number of task parents in a stack. The dynamic-length strategy of work stealing uses run-time information, which is information on the load of the victim, to determine the number of tasks that a thief is allowed to steal. We compare it with the bottommost first work stealing strategy used in StackThread/MP, and the fixed-length strategy of work stealing, where a thief requests to steal a fixed number of tasks, as well as other multithreaded frameworks such as Cilk and OpenMP task implementations. The experiments show that the dynamic-length strategy of work stealing performs well in irregular workloads such as in UTS benchmarks, as well as in regular workloads such as Fibonacci, Strassen's matrix multiplication, FFT, and Sparse-LU factorization. The dynamic-length strategy works better than the fixed-length strategy because it is more flexible than the latter; this strategy can avoid load imbalance due to overstealing.
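
    The contrast between fixed-length and dynamic-length steals can be illustrated with a deliberately tiny toy model; the sketch below only shows how the number of stolen tasks might depend on the victim's current load and omits all of the real runtime machinery (deque synchronization, victim selection, lazy task creation).

    ```python
    # Hedged toy sketch: a thief steals either a fixed number of tasks or a
    # share proportional to the victim's current load (here, half of its deque).
    from collections import deque

    def steal(victim: deque, strategy: str, fixed_count: int = 2):
        """Return the tasks stolen from the victim's pending-task deque."""
        if not victim:
            return []
        if strategy == "fixed":
            k = min(fixed_count, len(victim))
        else:                                   # "dynamic": scale with victim load
            k = max(1, len(victim) // 2)
        # steal from the opposite end of the deque from the one the victim works on
        return [victim.popleft() for _ in range(k)]

    pending = deque(range(10))                  # a victim with ten pending tasks
    print("fixed-length steal  :", steal(deque(pending), "fixed"))
    print("dynamic-length steal:", steal(deque(pending), "dynamic"))
    ```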

  3. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    PubMed

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  4. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.

  5. Parallel screening of drug-like natural compounds using Caco-2 cell permeability QSAR model with applicability domain, lipophilic ligand efficiency index and shape property: A case study of HIV-1 reverse transcriptase inhibitors

    NASA Astrophysics Data System (ADS)

    Patel, Rikin D.; Kumar, Sivakumar Prasanth; Patel, Chirag N.; Shankar, Shetty Shilpa; Pandya, Himanshu A.; Solanki, Hitesh A.

    2017-10-01

    The traditional drug design strategy centrally focuses on optimizing binding affinity with the receptor target and evaluates pharmacokinetic properties at a later stage which causes high rate of attrition in clinical trials. Alternatively, parallel screening allows evaluation of these properties and affinity simultaneously. In a case study to identify leads from natural compounds with experimental HIV-1 reverse transcriptase (RT) inhibition, we integrated various computational approaches including Caco-2 cell permeability QSAR model with applicability domain (AD) to recognize drug-like natural compounds, molecular docking to study HIV-1 RT interactions and shape similarity analysis with known crystal inhibitors having characteristic butterfly-like model. Further, the lipophilic properties of the compounds refined from the process with best scores were examined using lipophilic ligand efficiency (LLE) index. Seven natural compound hits viz. baicalien, (+)-calanolide A, mniopetal F, fagaronine chloride, 3,5,8-trihydroxy-4-quinolone methyl ether derivative, nitidine chloride and palmatine, were prioritized based on LLE score which demonstrated Caco-2 well absorption labeling, encompassment in AD structural coverage, better receptor affinity, shape adaptation and permissible AlogP value. We showed that this integrative approach is successful in lead exploration of natural compounds targeted against HIV-1 RT enzyme.

  6. The WELL Strategy. Workforce Education & Lifelong Learning for Education and Economic Reform.

    ERIC Educational Resources Information Center

    San Diego Community Coll. District, CA.

    National concerns linking education and economic development have been stated in "America 2000: An Education Strategy." The America 2000 strategy represents the direction to educational and economic reform in a metaphor of four trains leaving a station on four parallel tracks. However, this misses the point that the tracks are actually…

  7. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826

  8. Scalable multi-objective control for large scale water resources systems under uncertainty

    NASA Astrophysics Data System (ADS)

    Giuliani, Matteo; Quinn, Julianne; Herman, Jonathan; Castelletti, Andrea; Reed, Patrick

    2016-04-01

    The use of mathematical models to support the optimal management of environmental systems has rapidly expanded in recent years due to advances in scientific knowledge of the natural processes, efficiency of the optimization techniques, and availability of computational resources. However, ongoing changes in climate and society introduce additional challenges for controlling these systems, ultimately motivating the emergence of complex models to explore key causal relationships and dependencies on uncontrolled sources of variability. In this work, we contribute a novel implementation of the evolutionary multi-objective direct policy search (EMODPS) method for controlling environmental systems under uncertainty. The proposed approach combines direct policy search (DPS) with hierarchical parallelization of multi-objective evolutionary algorithms (MOEAs) and offers a threefold advantage: the DPS simulation-based optimization can be combined with any simulation model and does not add any constraint on modeled information, allowing the use of exogenous information in conditioning the decisions. Moreover, the combination of DPS and MOEAs prompts the generation of Pareto-approximate sets of solutions for up to 10 objectives, thus overcoming the decision biases produced by cognitive myopia, where narrow or restrictive definitions of optimality strongly limit the discovery of decision relevant alternatives. Finally, the use of large-scale MOEA parallelization improves the ability of the designed solutions to handle the uncertainty due to severe natural variability. The proposed approach is demonstrated on a challenging water resources management problem represented by the optimal control of a network of four multipurpose water reservoirs in the Red River basin (Vietnam). As part of the medium-long term energy and food security national strategy, four large reservoirs have been constructed on the Red River tributaries, which are mainly operated for hydropower production, flood control, and water supply. Numerical results under historical as well as synthetically generated hydrologic conditions show that our approach is able to discover key tradeoffs in the operation of the system. The ability of the algorithm to find near-optimal solutions increases with the number of islands in the adopted hierarchical parallelization scheme. In addition, although significant performance degradation is observed when the solutions designed over history are re-evaluated over synthetically generated inflows, we successfully reduced these vulnerabilities by identifying alternative solutions that are more robust to hydrologic uncertainties, while also addressing the tradeoffs across the Red River multi-sector services.

  9. MC64-ClustalWP2: A Highly-Parallel Hybrid Strategy to Align Multiple Sequences in Many-Core Architectures

    PubMed Central

    Díaz, David; Esteban, Francisco J.; Hernández, Pilar; Caballero, Juan Antonio; Guevara, Antonio

    2014-01-01

    We have developed the MC64-ClustalWP2 as a new implementation of the Clustal W algorithm, integrating a novel parallelization strategy and significantly increasing the performance when aligning long sequences in architectures with many cores. It must be stressed that in such a process, the detailed analysis of both the software and hardware features and peculiarities is of paramount importance to reveal key points to exploit and optimize the full potential of parallelism in many-core CPU systems. The new parallelization approach has focused on the most time-consuming stages of this algorithm. In particular, the so-called progressive alignment has drastically improved the performance, due to a fine-grained approach where the forward and backward loops were unrolled and parallelized. Another key approach has been the implementation of the new algorithm in a hybrid-computing system, integrating both an Intel Xeon multi-core CPU and a Tilera Tile64 many-core card. A comparison with other Clustal W implementations reveals the high performance of the new algorithm and strategy in many-core CPU architectures, in a scenario where the sequences to align are relatively long (more than 10 kb) and, hence, a many-core GPU hardware cannot be used. Thus, the MC64-ClustalWP2 runs multiple alignments more than 18x faster than the original Clustal W algorithm, and more than 7x faster than the best x86 parallel implementation to date, being publicly available through a web service. Besides, these developments have been deployed in cost-effective personal computers and should be useful for life-science researchers, including the identification of identities and differences for mutation/polymorphism analyses, biodiversity and evolutionary studies and for the development of molecular markers for paternity testing, germplasm management and protection, to assist breeding, illegal traffic control, fraud prevention and for the protection of the intellectual property (identification/traceability), including the protected designation of origin, among other applications. PMID:24710354

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiser, C.; Herdies, L.; McIntosh, L.

    Higher plant mitochondria possess a cyanide-resistant, hydroxamate-sensitive alternative pathway of electron transport that does not conserve energy. Aging of potato tuber slices for 24 hours leads to the development of an alternative pathway capacity. We have shown that a monoclonal antibody raised against the alternative pathway terminal oxidase of Sauromatum guttatum crossreacts with a protein of similar size in aged potato slice mitochondria. This protein was partially purified and characterized by two-dimensional gel electrophoresis, and its relative levels parallel the rise in cyanide-resistant respiration. We are using a putative clone of the S. guttatum alternative oxidase gene to isolate the equivalent gene from potato and to examine its expression.

  11. Multi Criteria Decision Making to evaluate control strategies of contagious animal diseases.

    PubMed

    Mourits, M C M; van Asseldonk, M A P M; Huirne, R B M

    2010-09-01

    The decision on which strategy to use in the control of contagious animal diseases involves complex trade-offs between multiple objectives. This paper describes a Multi Criteria Decision Making (MCDM) application to illustrate its potential support to policy makers in choosing the control strategy that best meets all of the conflicting interests. The presented application focused on the evaluation of alternative strategies to control Classical Swine Fever (CSF) epidemics within the European Union (EU) according to the preferences of the European Chief Veterinary Officers (CVO). The performed analysis was centred on the three high-level objectives of epidemiology, economics and social ethics. The appraised control alternatives consisted of the EU compulsory control strategy, a pre-emptive slaughter strategy, a protective vaccination strategy and a suppressive vaccination strategy. Using averaged preference weights of the elicited CVOs, the preference ranking of the control alternatives was determined for six EU regions. The obtained results emphasized the need for EU region-specific control. Individual CVOs differed in their views on the relative importance of the various (sub)criteria by which the performance of the alternatives were judged. Nevertheless, the individual rankings of the control alternatives within a region appeared surprisingly similar. Based on the results of the described application it was concluded that the structuring feature of the MCDM technique provides a suitable tool in assisting the complex decision making process of controlling contagious animal diseases. 2010 Elsevier B.V. All rights reserved.
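
    A weighted-sum aggregation is the simplest MCDM scoring rule and conveys the flavor of the analysis; the numbers below are entirely made up for illustration and are not the elicited CVO weights or the study's performance scores.

    ```python
    # Hedged sketch of weighted-sum multi-criteria ranking of control alternatives.
    import numpy as np

    alternatives = ["EU compulsory", "Pre-emptive slaughter",
                    "Protective vaccination", "Suppressive vaccination"]
    criteria = ["epidemiology", "economics", "social ethics"]
    weights = np.array([0.4, 0.35, 0.25])          # hypothetical weights, sum to 1

    # hypothetical normalized performance scores in [0, 1]; higher is better
    scores = np.array([[0.6, 0.5, 0.4],
                       [0.8, 0.4, 0.2],
                       [0.7, 0.6, 0.8],
                       [0.5, 0.7, 0.7]])

    overall = scores @ weights                      # one aggregate score per alternative
    for name, s in sorted(zip(alternatives, overall), key=lambda pair: -pair[1]):
        print(f"{name:<24s} {s:.3f}")
    ```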

  12. Enhancing membrane protein subcellular localization prediction by parallel fusion of multi-view features.

    PubMed

    Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu

    2012-12-01

    Membrane proteins are encoded by ~30% of the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desired to predict a membrane protein's subcellular location from the primary sequence considering the extreme difficulties of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed based on statistical machine learning algorithms with serial combination of multi-view features, i.e., different feature vectors are simply serially combined to form a super feature vector. However, such simple combination of features will simultaneously increase the information redundancy that could, in turn, deteriorate the final prediction accuracy. That's why it was often found that prediction success rates in the serial super space were even lower than those in a single-view space. The purpose of this paper is the investigation of a proper method for fusing multiple multi-view protein sequential features for subcellular location predictions. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that will represent protein samples in complex spaces. We also proposed generalized principal component analysis (GPCA) for feature reduction purposes in the complex geometry. All the experimental results through different machine learning algorithms on benchmark membrane protein subcellular localization datasets demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
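
    The distinction between serial and parallel feature fusion can be made concrete with a small sketch. The version below follows the common complex-space formulation of parallel fusion (the two views become the real and imaginary parts of one vector); the dimensions and data are hypothetical and the GPCA reduction step is not shown.

    ```python
    # Hedged sketch: serial fusion concatenates two feature views, while parallel
    # fusion pairs them component-wise in a complex space of dimension max(d1, d2).
    import numpy as np

    rng = np.random.default_rng(0)
    view_a = rng.standard_normal(50)        # e.g. composition-style features (hypothetical)
    view_b = rng.standard_normal(60)        # e.g. profile-style features (hypothetical)

    # serial fusion: a longer real-valued vector of dimension 50 + 60
    serial = np.concatenate([view_a, view_b])

    # parallel fusion: zero-pad the shorter view, then combine as real + i * imaginary
    d = max(view_a.size, view_b.size)
    a = np.pad(view_a, (0, d - view_a.size))
    b = np.pad(view_b, (0, d - view_b.size))
    parallel = a + 1j * b

    print(serial.shape, parallel.shape, parallel.dtype)
    ```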

  13. Effects of a parallel resistor on electrical characteristics of a piezoelectric transformer in open-circuit transient state.

    PubMed

    Chang, Kuo-Tsai

    2007-01-01

    This paper investigates electrical transient characteristics of a Rosen-type piezoelectric transformer (PT), including maximum voltages, time constants, energy losses and average powers, and their improvements immediately after turning OFF. A parallel resistor connected to both input terminals of the PT is needed to improve the transient characteristics. An equivalent circuit for the PT is first given. Then, an open-circuit voltage, involving a direct current (DC) component and an alternating current (AC) component, and its related energy losses are derived from the equivalent circuit with initial conditions. Moreover, an AC power control system, including a DC-to-AC resonant inverter, a control switch and electronic instruments, is constructed to determine the electrical characteristics of the OFF transient state. Furthermore, the effects of the parallel resistor on the transient characteristics at different parallel resistances are measured. The advantages of adding the parallel resistor are also discussed. From the measured results, the DC time constant is greatly decreased from 9 ms to 0.04 ms by a 10 kΩ parallel resistance under open output.
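
    The reported numbers invite a quick consistency check. Assuming the open-circuit DC decay behaves as a simple first-order RC discharge (an assumption for illustration, not a claim from the paper), the quoted time constant implies an effective input-side capacitance of a few nanofarads:

    ```python
    # Hedged back-of-envelope check: if tau = R * C, the quoted resistor and the
    # measured DC time constant imply the effective capacitance seen at the input.
    R_parallel = 10e3            # ohms, the parallel resistance quoted above
    tau_reported = 0.04e-3       # seconds, DC time constant with the resistor attached
    C_implied = tau_reported / R_parallel
    print(f"implied input-side capacitance: {C_implied * 1e9:.0f} nF")   # ~4 nF under this assumption
    ```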

  14. An object-oriented approach to nested data parallelism

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.; Chatterjee, Siddhartha

    1994-01-01

    This paper describes an implementation technique for integrating nested data parallelism into an object-oriented language. Data-parallel programming employs sets of data called 'collections' and expresses parallelism as operations performed over the elements of a collection. When the elements of a collection are also collections, then there is the possibility for 'nested data parallelism.' Few current programming languages support nested data parallelism however. In an object-oriented framework, a collection is a single object. Its type defines the parallel operations that may be applied to it. Our goal is to design and build an object-oriented data-parallel programming environment supporting nested data parallelism. Our initial approach is built upon three fundamental additions to C++. We add new parallel base types by implementing them as classes, and add a new parallel collection type called a 'vector' that is implemented as a template. Only one new language feature is introduced: the 'foreach' construct, which is the basis for exploiting elementwise parallelism over collections. The strength of the method lies in the compilation strategy, which translates nested data-parallel C++ into ordinary C++. Extracting the potential parallelism in nested 'foreach' constructs is called 'flattening' nested parallelism. We show how to flatten 'foreach' constructs using a simple program transformation. Our prototype system produces vector code which has been successfully run on workstations, a CM-2, and a CM-5.
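
    The flattening idea can be illustrated in an array style without any of the C++ machinery: a nested collection is stored as one flat value array plus a segment descriptor, so an elementwise operation over all inner elements becomes a single flat data-parallel operation. The sketch below is a language-neutral illustration of that transformation, not the paper's C++-to-C++ compiler output.

    ```python
    # Hedged sketch of flattened nested data parallelism using segmented arrays.
    import numpy as np

    nested = [[1, 2, 3], [4], [5, 6]]                             # collection of collections
    values = np.concatenate([np.asarray(seg) for seg in nested])  # flat data array
    lengths = np.array([len(seg) for seg in nested])              # segment descriptor

    squared_flat = values ** 2                    # one flat elementwise ("foreach") operation

    # per-segment reductions recovered from the segment descriptor (sum of squares)
    offsets = np.concatenate([[0], np.cumsum(lengths)[:-1]])
    per_segment_sums = np.add.reduceat(squared_flat, offsets)
    print(squared_flat, per_segment_sums)
    ```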

  15. An alternative low-loss stack topology for vanadium redox flow battery: Comparative assessment

    NASA Astrophysics Data System (ADS)

    Moro, Federico; Trovò, Andrea; Bortolin, Stefano; Del Col, Davide; Guarnieri, Massimo

    2017-02-01

    Two vanadium redox flow battery topologies have been compared. In the conventional series stack, bipolar plates connect cells electrically in series and hydraulically in parallel. The alternative topology consists of cells connected in parallel inside stacks by means of monopolar plates in order to reduce shunt currents along channels and manifolds. Channelled and flat current collectors interposed between cells were considered in both topologies. In order to compute the stack losses, an equivalent circuit model of a VRFB cell was built from a 2D FEM multiphysics numerical model based on Comsol®, accounting for coupled electrical, electrochemical, and charge and mass transport phenomena. Shunt currents were computed inside the cells with 3D FEM models and in the piping and manifolds by means of equivalent circuits solved with Matlab®. Hydraulic losses were computed with analytical models in piping and manifolds and with 3D numerical analyses based on ANSYS Fluent® in the cell porous electrodes. Total losses in the alternative topology were one order of magnitude lower than in an equivalent conventional battery. The alternative topology with channelled current collectors exhibits the lowest shunt currents and hydraulic losses, with round-trip efficiency higher by about 10%, as compared to the conventional topology.

  16. Ordered fast Fourier transforms on a massively parallel hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Tong, Charles; Swarztrauber, Paul N.

    1991-01-01

    The present evaluation of alternative designs of ordered radix-2 decimation-in-frequency FFT algorithms for massively parallel hypercube processors gives attention to reducing communication, which dominates computation time. A combination of the order and computational phases of the FFT is accordingly employed, in conjunction with sequence-to-processor maps which reduce communication. Two orderings, 'standard' and 'cyclic', in which the order of the transform is the same as that of the input sequence, can be implemented with ease on the Connection Machine (where orderings are determined by geometries and priorities). A parallel method for trigonometric coefficient computation is presented which does not employ trigonometric functions or interprocessor communication.

  17. Alternative Strategies for Control of Sulfur Dioxide Emissions

    ERIC Educational Resources Information Center

    MacDonald, Bryce I.

    1975-01-01

    Achievement of air quality goals requires careful consideration of alternative control strategies in view of national concerns with energy and the economy. Three strategies which might be used by coal fired steam electric plants to achieve ambient air quality standards for sulfur dioxide have been compared and the analysis presented. (Author/BT)

  18. Parallel Leadership: A Clue to the Contents of the "Black Box" of School Reform.

    ERIC Educational Resources Information Center

    Andrews, Dorothy; Crowther, Frank

    2002-01-01

    Examined a conceptualization of teacher leadership (derived from a 1997 study) in a range of school reform case studies. Focused on the interactivity of teacher leaders and administrator leaders and generated a concept called "parallel leadership," a strategy that appears to illuminate ways in which school-based leadership may contribute to…

  19. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  20. Alternative Loglinear Smoothing Models and Their Effect on Equating Function Accuracy. Research Report. ETS RR-09-48

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2009-01-01

    This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…

  1. Linking linear programming and spatial simulation models to predict landscape effects of forest management alternatives

    Treesearch

    Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers

    2006-01-01

    Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...

  2. Parallel Reaction Monitoring: A Targeted Experiment Performed Using High Resolution and High Mass Accuracy Mass Spectrometry

    PubMed Central

    Rauniyar, Navin

    2015-01-01

    The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379

  3. Parallel k-means++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
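
    As background for what is being parallelized, the sequential k-means++ seeding rule is compact enough to sketch directly; the snippet below is a plain reference version (no CUDA/Thrust, OpenMP, or XMT code) with illustrative toy data.

    ```python
    # Hedged sketch of sequential k-means++ seed selection: pick the first center
    # uniformly at random, then pick each new center with probability proportional
    # to the squared distance from the nearest center already chosen.
    import numpy as np

    def kmeanspp_seeds(points, k, seed=0):
        rng = np.random.default_rng(seed)
        centers = [points[rng.integers(len(points))]]
        for _ in range(k - 1):
            d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
            probs = d2 / d2.sum()
            centers.append(points[rng.choice(len(points), p=probs)])
        return np.array(centers)

    # toy usage on three well-separated blobs
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(m, 0.2, (50, 2)) for m in ((0, 0), (5, 5), (0, 5))])
    print(kmeanspp_seeds(data, 3))
    ```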

  4. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    PubMed Central

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.

    2014-01-01

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844

  5. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    DOE PAGES

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...

    2014-11-18

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  6. Ecotourism and Interpretation in Costa Rica: Parallels and Peregrinations.

    ERIC Educational Resources Information Center

    Williams, Wayne E.

    1994-01-01

    Discusses the ecotourism industry in Costa Rica and some of the problems faced by its national park system, including megaparks, rapid increase in tourism, and interpretive services. Suggests alternatives for the problems. (MKR)

  7. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve efficiency of arithmetic pipelines.
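
    One such data reorganization is to sort particles by grid cell so that the deposit/gather loops touch memory with near-unit stride; the NumPy sketch below illustrates that idea as an assumption and is not the paper's Fortran implementation.

      import numpy as np

      def sort_particles_by_cell(x, y, vx, vy, cell_size, grid_w):
          """Reorder particle arrays so particles in the same grid cell are
          contiguous in memory, improving cache reuse in deposit/gather loops."""
          cell = (y // cell_size).astype(int) * grid_w + (x // cell_size).astype(int)
          order = np.argsort(cell, kind="stable")   # group particles cell by cell
          return x[order], y[order], vx[order], vy[order]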

  8. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  9. Implementing and analyzing the multi-threaded LP-inference

    NASA Astrophysics Data System (ADS)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for backward inference optimization in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. The execution of each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which increases the degree of parallelism. The authors also provide some details of the LP-structures implementation.

  10. Dual compile strategy for parallel heterogeneous execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  11. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
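
    The core computational kernel, iterative soft-thresholding of wavelet coefficients alternated with a k-space data-consistency step, can be sketched for a single channel as below; this is a simplified ISTA-style illustration under the assumption of an orthonormal wavelet pair (wavelet_fwd and wavelet_inv are placeholder handles), not the multi-channel SPIRiT reconstruction or the authors' parallel code.

      import numpy as np

      def soft_threshold(x, lam):
          """Complex soft-thresholding: shrink coefficient magnitudes by lam."""
          mag = np.maximum(np.abs(x), 1e-12)
          return np.where(mag > lam, (1 - lam / mag) * x, 0)

      def ista_recon(y, mask, wavelet_fwd, wavelet_inv, lam=0.01, iters=50):
          """Alternate a k-space data-consistency projection with wavelet-domain
          soft-thresholding; y is the measured (zero-filled) k-space, mask marks
          acquired samples, and the wavelet handles are assumed orthonormal."""
          img = np.fft.ifft2(y)                      # zero-filled starting image
          for _ in range(iters):
              k = np.fft.fft2(img)
              k[mask] = y[mask]                      # enforce consistency with the data
              img = wavelet_inv(soft_threshold(wavelet_fwd(np.fft.ifft2(k)), lam))
          return img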

  12. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  13. Annual Screening Strategies in BRCA1 and BRCA2 Gene Mutation Carriers: A Comparative Effectiveness Analysis

    PubMed Central

    Lowry, Kathryn P.; Lee, Janie M.; Kong, Chung Y.; McMahon, Pamela M.; Gilmore, Michael E.; Cott Chubiz, Jessica E.; Pisano, Etta D.; Gatsonis, Constantine; Ryan, Paula D.; Ozanne, Elissa M.; Gazelle, G. Scott

    2011-01-01

    Background. While breast cancer screening with mammography and MRI is recommended for BRCA mutation carriers, there is no current consensus on the optimal screening regimen. Methods. We used a computer simulation model to compare six annual screening strategies [film mammography (FM), digital mammography (DM), FM and magnetic resonance imaging (MRI) or DM and MRI contemporaneously, and alternating FM/MRI or DM/MRI at six-month intervals] beginning at ages 25, 30, 35, and 40, and two strategies of annual MRI with delayed alternating DM/FM, against clinical surveillance alone. Strategies were evaluated without and with mammography-induced breast cancer risk, using two models of excess relative risk. Input parameters were obtained from the medical literature, publicly available databases, and calibration. Results. Without radiation risk effects, alternating DM/MRI starting at age 25 provided the highest life expectancy (BRCA1: 72.52 years, BRCA2: 77.63 years). When radiation risk was included, a small proportion of diagnosed cancers were attributable to radiation exposure (BRCA1: <2%, BRCA2: <4%). With radiation risk, alternating DM/MRI at age 25 or annual MRI at age 25/delayed alternating DM at age 30 were most effective, depending on the radiation risk model used. Alternating DM/MRI starting at age 25 also had the highest number of false-positive screens/person (BRCA1: 4.5, BRCA2: 8.1). Conclusions. Annual MRI at 25/delayed alternating DM at age 30 is likely the most effective screening strategy in BRCA mutation carriers. Screening benefits, associated risks, and personal acceptance of false-positive results should be considered in choosing the optimal screening strategy for individual women. PMID:21935911

  14. Parallel steady state studies on a milliliter scale accelerate fed-batch bioprocess design for recombinant protein production with Escherichia coli.

    PubMed

    Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk

    2016-11-01

    In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state-of-the-art methods for identifying suitable reaction conditions suffer from severe drawbacks, i.e. direct transfer of process information from parallel batch studies is often defective and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. Best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in the product concentration of 21% (8.4 g L-1) compared to an implemented high-performance production process with the most frequently applied induction strategy, a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method to accelerate bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:1426-1435, 2016.

  15. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
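
    The coarse-grain strategy, zones advanced independently and boundary values exchanged once per time step, can be sketched as follows; a toy Jacobi sweep and a Python thread pool stand in for the LU/BT/SP solvers and the MPI/OpenMP layers of the reference implementations.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def advance_zone(u):
          """Advance one zone independently (a toy Jacobi sweep stands in for
          the LU/BT/SP solver applied to each zone's own mesh)."""
          v = u.copy()
          v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
          return v

      def exchange_boundaries(zones):
          """After each step, neighbouring zones swap boundary columns -- the
          only coupling between zones in the multi-zone formulation."""
          for left, right in zip(zones[:-1], zones[1:]):
              left[:, -1], right[:, 0] = right[:, 1].copy(), left[:, -2].copy()
          return zones

      def multi_zone_step(zones, pool):
          # coarse-grain parallelism between zones; each solver call could itself
          # be multi-threaded, mirroring the hybrid MPI+OpenMP layering
          return exchange_boundaries(list(pool.map(advance_zone, zones)))

      # usage sketch:
      # with ThreadPoolExecutor() as pool:
      #     for _ in range(n_steps):
      #         zones = multi_zone_step(zones, pool)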

  16. Evaluation of P1'-diversified phosphinic peptides leads to the development of highly selective inhibitors of MMP-11.

    PubMed

    Matziari, Magdalini; Beau, Fabrice; Cuniasse, Philippe; Dive, Vincent; Yiotakis, Athanasios

    2004-01-15

    Phosphinic peptides were previously reported to be potent inhibitors of several matrixins (MMPs). To identify more selective inhibitors of MMP-11, a matrixin overexpressed in breast cancer, a series of phosphinic pseudopeptides bearing a variety of P1'-side chains has been synthesized, by parallel diversification of a phosphinic template. The potencies of these compounds were evaluated against a set of seven MMPs (MMP-2, MMP-7, MMP-8, MMP-9, MMP-11, MMP-13, and MMP-14). The chemical strategy applied led to the identification of several phosphinic inhibitors displaying high selectivity toward MMP-11. One of the most selective inhibitors of MMP-11 in this series, compound 22, exhibits a Ki value of 0.23 microM toward MMP-11, while its potency toward the other MMPs tested is 2 orders of magnitude lower. This remarkable selectivity may rely on interactions of the P1'-side chain atoms of these inhibitors with residues located at the entrance of the S1'-cavity of MMP-11. The design of inhibitors able to interact with residues located at the entrance of MMPs' S1'-cavity might represent an alternative strategy to identify selective inhibitors that will fully differentiate one MMP among the others.

  17. Margalef's mandala and phytoplankton bloom strategies

    NASA Astrophysics Data System (ADS)

    Wyatt, Timothy

    2014-03-01

    Margalef's mandala maps phytoplankton species into a phase space defined by turbulence (A) and nutrient concentrations (Ni); these are the hard axes. The permutations of high and low A and high and low Ni divide the space into four domains. Soft axes indicate some ecological dynamics. A main sequence shows the normal course of phytoplankton succession; the r-K axis of MacArthur and Wilson runs parallel to it. An alternative successional sequence leads to the low A-high Ni domain into which many red tide species are mapped. Astronomical and biological time are implicit. A mathematical transformation of the mandala (rotation) links it to the classical bloom models of Sverdrup (time) and Kierstead and Slobodkin (space). Both rarity and the propensity to form red tides are considered to be species characters, meaning that maximum population abundance can be a target of natural selection. Equally, both the unpredictable appearance of bloom species and their short-lived appearances may be species characters. There may be a correlation too between these features and long-lived dormant stages in the life-cycle; then the vegetative planktonic phase is the 'weak link' in the life-cycle. Red tides are thus due to species which have evolved suites of traits which result in specific demographic strategies.

  18. Alternative IT Sourcing Strategies: From the Campus to the Cloud. ECAR Key Findings

    ERIC Educational Resources Information Center

    Goldstein, Philip J.

    2009-01-01

    This document presents the key findings from the 2009 ECAR (EDUCAUSE Center for Applied Research) study, "Alternative IT Sourcing Strategies: From the Campus to the Cloud," by Philip J. Goldstein. The study explores a multitude of strategies used by college and university information technology organizations to deliver the breadth of technologies…

  19. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.

  20. The molecular biology of feline immunodeficiency virus (FIV).

    PubMed

    Kenyon, Julia C; Lever, Andrew M L

    2011-11-01

    Feline immunodeficiency virus (FIV) is widespread in feline populations and causes an AIDS-like illness in domestic cats. It is highly prevalent in several endangered feline species. In domestic cats FIV infection is a valuable small animal model for HIV infection. In recent years there has been a significant increase in interest in FIV, in part to exploit this, but also because of the potential it has as a human gene therapy vector. Though much less studied than HIV there are many parallels in the replication of the two viruses, but also important differences and, despite their likely common origin, the viruses have in some cases used alternative strategies to overcome similar problems. Recent advances in understanding the structure and function of FIV RNA and proteins and their interactions have enhanced our knowledge of FIV replication significantly; however, there are still many gaps. This review summarizes our current knowledge of FIV molecular biology and its similarities with, and differences from, other lentiviruses.

  1. Performance of rapid tests and algorithms for HIV screening in Abidjan, Ivory Coast.

    PubMed

    Loukou, Y G; Cabran, M A; Yessé, Zinzendorf Nanga; Adouko, B M O; Lathro, S J; Agbessi-Kouassi, K B T

    2014-01-01

    Seven rapid diagnostic tests (RDTs) for HIV were evaluated using a panel of serum samples collected from patients in Abidjan (HIV-1 = 203, HIV-2 = 25, HIV-dual = 25, HIV = 305). Kit performance was assessed against the reference technique (enzyme-linked immunosorbent assay). The following RDTs showed a sensitivity of 100% and a specificity higher than 99%: Determine, Oraquick, SD Bioline, BCP, and Stat-Pak. These kits were used to establish infection screening strategies. Combining 2 or 3 of these tests in series or parallel algorithms showed that series combinations with 2 tests (Oraquick and Bioline) and 3 tests (Determine, BCP, and Stat-Pak) gave the best performances (sensitivity, specificity, positive predictive value, and negative predictive value of 100%). However, the combination with 2 tests appeared to be more costly than the combination with 3 tests. The combination of Determine, BCP, and Stat-Pak, with the third test serving as a tiebreaker, could be an alternative for HIV/AIDS serological screening in Abidjan.
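
    The trade-off between series and parallel algorithms can be made explicit with a little arithmetic: assuming conditional independence between tests, sensitivities multiply under a series ("all must be positive") rule, while specificities multiply under a parallel ("any positive") rule. The numbers below are illustrative only, not the Abidjan panel results.

      def combine_serial(tests):
          """'All tests must be positive' rule: sensitivities multiply,
          false-positive rates multiply (so specificity improves)."""
          sens, fpr = 1.0, 1.0
          for se, sp in tests:
              sens *= se
              fpr *= (1 - sp)
          return sens, 1 - fpr

      def combine_parallel(tests):
          """'Any positive counts' rule: specificities multiply,
          false-negative rates multiply (so sensitivity improves)."""
          fnr, spec = 1.0, 1.0
          for se, sp in tests:
              fnr *= (1 - se)
              spec *= sp
          return 1 - fnr, spec

      # illustrative (sensitivity, specificity) pairs, not the study's data:
      tests = [(0.998, 0.995), (0.997, 0.993)]
      print(combine_serial(tests))    # specificity rises, sensitivity falls slightly
      print(combine_parallel(tests))  # sensitivity rises, specificity falls slightly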

  2. Alternative stitching method for massively parallel e-beam lithography

    NASA Astrophysics Data System (ADS)

    Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume

    2015-03-01

    In this study a novel stitching method other than Soft Edge (SE) and Smart Boundary (SB) is introduced and benchmarked against SE. The method is based on locally enhanced exposure latitude at no cost in throughput, making use of the fact that the two beams that pass through the stitching region can deposit up to 2x the nominal dose. The method requires a complex proximity effect correction that takes a preset stitching dose profile into account. On a metal clip at a minimum half-pitch of 32 nm, for MAPPER FLX 1200 tool specifications, the novel stitching method effectively mitigates beam-to-beam (B2B) position errors such that they do not induce an increase in CD uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside it. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. The 5 nm direct overlay impact from B2B position errors cannot be reduced by a stitching strategy.

  3. Nutrition and muscle catabolism in maintenance hemodialysis: does feeding make muscle cells selective self-eaters?

    PubMed

    Franch, Harold A

    2009-01-01

    Efforts to build muscle by increased protein feeding in hemodialysis patients have been thwarted by parallel increases in both muscle protein synthesis and degradation. The evidence suggests that muscle cells replace older proteins in response to feeding rather than using new proteins to drive muscle cell hypertrophy. This review presents the hypothesis that protein feeding provides an opportunity for muscle to accelerate proteolysis of proteins that have been damaged by oxidation, nitrosylation, and/or glycosylation and to replace damaged mitochondria that contribute to oxidative stress. Increases in proteolysis with feeding are driven by insulin resistance and the increased oxidative stress of mitochondrial respiration. Oxidized proteins and organelles are excellent substrates for degradation by the proteasome, macroautophagy, and chaperone-mediated autophagy: these systems of proteolysis seem to be activated by oxidative stress. Replacement of oxidized and other damaged proteins may be a benefit of protein feeding in hemodialysis, but alternative strategies, including exercise, will be required to build muscle.

  4. Nutrition and Muscle Catabolism in Maintenance Hemodialysis: Does Feeding Make Muscle Cells Selective Self-Eaters?

    PubMed Central

    Franch, Harold A.

    2009-01-01

    Efforts to build muscle by increased protein feeding in hemodialysis patients have been thwarted by parallel increases in both muscle protein synthesis and degradation. The evidence suggests that muscle cells replace older proteins in response to feeding rather than using new proteins to drive muscle cell hypertrophy. This review presents the hypothesis that protein feeding provides an opportunity for muscle to accelerate proteolysis of proteins which have been damaged by oxidation, nitrosylation and/or glycosylation and to replace damaged mitochondria that contribute to oxidative stress. Increases in proteolysis with feeding are driven by insulin resistance and the increased oxidative stress of mitochondrial respiration. Oxidized proteins and organelles are excellent substrates for degradation by the proteasome, macroautophagy, and chaperone-mediated autophagy: these systems of proteolysis seem to be activated by oxidative stress. Replacement of oxidized and other damaged proteins may be a benefit of protein feeding in hemodialysis, but alternative strategies, including exercise, will be required to build muscle. PMID:19121779

  5. Beyond union of subspaces: Subspace pursuit on Grassmann manifold for data representation

    DOE PAGES

    Shen, Xinyue; Krim, Hamid; Gu, Yuantao

    2016-03-01

    Discovering the underlying structure of a high-dimensional signal or big data has always been a challenging topic, and has become harder to tackle especially when the observations are exposed to arbitrary sparse perturbations. In this paper, built on the model of a union of subspaces (UoS) with sparse outliers and inspired by a basis pursuit strategy, we exploit the fundamental structure of a Grassmann manifold, and propose a new technique of pursuing the subspaces systematically by solving a non-convex optimization problem using the alternating direction method of multipliers. This problem is further complicated by non-convex constraints on the Grassmann manifold, as well as the bilinearity in the penalty caused by the subspace bases and coefficients. Nevertheless, numerical experiments verify that the proposed algorithm, which provides elegant solutions to the sub-problems in each step, is able to de-couple the subspaces and pursue each of them under time-efficient parallel computation.

  6. Iterative Importance Sampling Algorithms for Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicabilitymore » of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.« less
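
    The iterative Gaussian-proposal idea can be sketched in a few lines: draw independent samples from the current proposal, weight them against the target, then refit the proposal mean and covariance from the weighted samples. The sketch below is a generic illustration (log_post is a user-supplied unnormalized log posterior), not the authors' implementation.

      import numpy as np
      from scipy.stats import multivariate_normal

      def iterative_importance_sampling(log_post, dim, n_samples=2000, iters=10,
                                        rng=np.random.default_rng()):
          """Iterate the mean/covariance of a Gaussian proposal: sample, weight
          against the target, refit the proposal from the weighted samples.
          The per-sample evaluations are independent, hence embarrassingly parallel."""
          mu, cov = np.zeros(dim), np.eye(dim)
          for _ in range(iters):
              x = rng.multivariate_normal(mu, cov, size=n_samples)
              logw = np.array([log_post(xi) for xi in x]) \
                     - multivariate_normal.logpdf(x, mu, cov)
              w = np.exp(logw - logw.max())
              w /= w.sum()                                        # self-normalized weights
              mu = w @ x                                          # weighted mean
              cov = (x - mu).T @ ((x - mu) * w[:, None]) + 1e-6 * np.eye(dim)
          return x, w, mu, cov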

  7. Management Development: A Need or a Luxury?

    ERIC Educational Resources Information Center

    Tasca, Anthony J.

    1975-01-01

    Focusing on management development, the article suggests alternative ways for training and development professionals to adapt to economic cycle-related periods of feast or famine. Diagnosis, personnel management strategy, measurement, and alternative strategies are topics considered. (MW)

  8. Parallel constraint satisfaction in memory-based decisions.

    PubMed

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.

  9. An integrated control strategy for the composite braking system of an electric vehicle with independently driven axles

    NASA Astrophysics Data System (ADS)

    Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang

    2016-08-01

    For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate regenerative braking and hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode, and the pure hydraulic mode. For the hybrid composite mode and the parallel composite mode, the coefficients for distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and response surfaces related to the driving state parameters were established. Meanwhile, the six-sigma method was applied to handle uncertainty and ensure reliability. Additionally, the pure hydraulic mode is activated to ensure braking safety and stability when predicted failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and a high energy regeneration rate, and the reliability of the braking strategy was confirmed under general braking conditions.

  10. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
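
    The block hierarchy the library manages can be pictured with a toy quad-tree of logically Cartesian blocks; the Python sketch below only illustrates the data structure (the package itself is Fortran 90 and also handles guard cells, load balancing, and 3-D oct-trees), and all names are illustrative.

      from dataclasses import dataclass, field

      @dataclass
      class Block:
          """One logically Cartesian sub-grid block: a node of the 2-D quad-tree."""
          x0: float
          y0: float
          size: float
          level: int
          children: list = field(default_factory=list)

          def refine(self):
              """Split the block into four half-size children (eight in 3-D)."""
              h = self.size / 2
              self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                               for j in (0, 1) for i in (0, 1)]

      def leaves(block):
          """Leaf blocks carry the solution at the locally finest resolution."""
          if not block.children:
              yield block
          else:
              for child in block.children:
                  yield from leaves(child)

      # usage sketch: refine where the application demands resolution
      root = Block(0.0, 0.0, 1.0, 0)
      root.refine()
      root.children[0].refine()
      print(len(list(leaves(root))))   # 7 leaf blocks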

  11. Public and private health-care financing with alternate public rationing rules.

    PubMed

    Cuff, Katherine; Hurley, Jeremiah; Mestelman, Stuart; Muller, Andrew; Nuscheler, Robert

    2012-02-01

    We develop a model to analyze parallel public and private health-care financing under two alternative public sector rationing rules: needs-based rationing and random rationing. Individuals vary in income and severity of illness. There is a limited supply of health-care resources used to treat individuals, causing some individuals to go untreated. Insurers (both public and private) must bid to obtain the necessary health-care resources to treat their beneficiaries. Given that individuals' willingness-to-pay for private insurance is increasing in income, the introduction of private insurance diverts treatment from relatively poor to relatively rich individuals. Further, the impact of introducing parallel private insurance depends on the rationing mechanism in the public sector. We show that the private health insurance market is smaller when the public sector rations according to need than when allocation is random. Copyright © 2010 John Wiley & Sons, Ltd.

  12. Motions of the hand expose the partial and parallel activation of stereotypes.

    PubMed

    Freeman, Jonathan B; Ambady, Nalini

    2009-10-01

    Perceivers spontaneously sort other people's faces into social categories and activate the stereotype knowledge associated with those categories. In the work described here, participants, presented with sex-typical and sex-atypical faces (i.e., faces containing a mixture of male and female features), identified which of two gender stereotypes (one masculine and one feminine) was appropriate for the face. Meanwhile, their hand movements were measured by recording the streaming x, y coordinates of the computer mouse. As participants stereotyped sex-atypical faces, real-time motor responses exhibited a continuous spatial attraction toward the opposite-gender stereotype. These data provide evidence for the partial and parallel activation of stereotypes belonging to alternate social categories. Thus, perceptual cues of the face can trigger a graded mixture of simultaneously active stereotype knowledge tied to alternate social categories, and this mixture settles over time onto ultimate judgments.

  13. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU part of the parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU part, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times relative to a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate. PMID:27070606
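
    The task partitioning idea, splitting image blocks across devices so that the CPU and GPU finish at about the same time, can be sketched generically; the device labels, throughput numbers, and the focus kernel handle below are illustrative assumptions, not the paper's CUDA/AVX implementation.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def split_by_throughput(n_blocks, rates):
          """Assign azimuth blocks to devices in proportion to measured throughput
          so that all devices finish their share at roughly the same time."""
          shares = np.floor(n_blocks * np.asarray(rates, float) / sum(rates)).astype(int)
          shares[-1] = n_blocks - shares[:-1].sum()        # last device takes the remainder
          bounds = np.concatenate(([0], np.cumsum(shares)))
          return [range(bounds[i], bounds[i + 1]) for i in range(len(rates))]

      def process_on_device(device, blocks, focus):
          """Run the focusing kernel for this device's blocks; focus is a
          placeholder for the device-specific (AVX or CUDA) imaging routine."""
          return [focus(device, b) for b in blocks]

      def collaborative_image(n_blocks, devices, rates, focus):
          parts = split_by_throughput(n_blocks, rates)
          with ThreadPoolExecutor(len(devices)) as pool:   # devices work concurrently
              futures = [pool.submit(process_on_device, d, p, focus)
                         for d, p in zip(devices, parts)]
              return [blk for f in futures for blk in f.result()]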

  14. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU part of the parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU part, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times relative to a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  15. Impact of greenhouse gas metrics on the quantification of agricultural emissions and farm-scale mitigation strategies: a New Zealand case study

    NASA Astrophysics Data System (ADS)

    Reisinger, Andy; Ledgard, Stewart

    2013-06-01

    Agriculture emits a range of greenhouse gases. Greenhouse gas metrics allow emissions of different gases to be reported in a common unit called CO2-equivalent. This enables comparisons of the efficiency of different farms and production systems and of alternative mitigation strategies across all gases. The standard metric is the 100 year global warming potential (GWP), but alternative metrics have been proposed and could result in very different CO2-equivalent emissions, particularly for CH4. While significant effort has been made to reduce uncertainties in emissions estimates of individual gases, little effort has been spent on evaluating the implications of alternative metrics on overall agricultural emissions profiles and mitigation strategies. Here we assess, for a selection of New Zealand dairy farms, the effect of two alternative metrics (100 yr GWP and global temperature change potentials, GTP) on farm-scale emissions and apparent efficiency and cost effectiveness of alternative mitigation strategies. We find that alternative metrics significantly change the balance between CH4 and N2O; in some cases, alternative metrics even determine whether a specific management option would reduce or increase net farm-level emissions or emissions intensity. However, the relative ranking of different farms by profitability or emissions intensity, and the ranking of the most cost-effective mitigation options for each farm, are relatively unaffected by the metric. We conclude that alternative metrics would change the perceived significance of individual gases from agriculture and the overall cost to farmers if a price were applied to agricultural emissions, but the economically most effective response strategies are unaffected by the choice of metric.
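
    The metric dependence comes straight from the defining arithmetic: CO2-equivalent emissions are the mass of each gas multiplied by its metric value and summed. The metric values and farm totals below are round, indicative numbers of the kind reported in IPCC assessments, not the figures used in this study.

      # Illustrative only: metric values are indicative (AR5-style) round numbers,
      # and the farm emission totals are made up.
      METRICS = {
          "GWP100": {"CH4": 28.0, "N2O": 265.0, "CO2": 1.0},
          "GTP100": {"CH4": 4.0,  "N2O": 234.0, "CO2": 1.0},
      }

      farm = {"CH4": 9.5, "N2O": 1.2, "CO2": 15.0}   # tonnes of each gas per year

      def co2_equivalent(emissions, metric):
          """CO2-eq = sum over gases of (mass emitted x metric value for that gas)."""
          m = METRICS[metric]
          return sum(mass * m[gas] for gas, mass in emissions.items())

      for metric in METRICS:
          print(metric, round(co2_equivalent(farm, metric), 1), "t CO2-eq/yr")
      # CH4-heavy profiles shrink sharply under GTP100, which shifts the CH4/N2O balance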

  16. At the Mercy of Strategies: The Role of Motor Representations in Language Understanding

    PubMed Central

    Tomasino, Barbara; Rumiati, Raffaella Ida

    2013-01-01

    Classical cognitive theories hold that word representations in the brain are abstract and amodal, and are independent of the objects’ sensorimotor properties they refer to. An alternative hypothesis emphasizes the importance of bodily processes in cognition: the representation of a concept appears to be crucially dependent upon perceptual-motor processes that relate to it. Thus, understanding action-related words would rely upon the same motor structures that also support the execution of the same actions. In this context, motor simulation represents a key component. Our approach is to draw parallels between the literature on mental rotation and the literature on action verb/sentence processing. Here we will discuss recent studies on mental imagery, mental rotation, and language that clearly demonstrate how motor simulation is neither automatic nor necessary to language understanding. These studies have shown that motor representations can or cannot be activated depending on the type of strategy the participants adopt to perform tasks involving motor phrases. On the one hand, participants may imagine the movement with the body parts used to carry out the actions described by the verbs (i.e., motor strategy); on the other, individuals may solve the task without simulating the corresponding movements (i.e., visual strategy). While it is not surprising that the motor strategy is at work when participants process action-related verbs, it is however striking that sensorimotor activation has been reported also for imageable concrete words with no motor content, for “non-words” with regular phonology, for pseudo-verb stimuli, and also for negations. Based on the extant literature, we will argue that implicit motor imagery is not uniquely used when a body-related stimulus is encountered, and that it is not the type of stimulus that automatically triggers the motor simulation but the type of strategy. Finally, we will also comment on the view that sensorimotor activations are subjected to a top-down modulation. PMID:23382722

  17. Environmental evaluation of alternatives for long-term management of Defense high-level radioactive wastes at the Idaho Chemical Processing Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-09-01

    The U.S. Department of Energy (DOE) is considering the selection of a strategy for the long-term management of the defense high-level wastes at the Idaho Chemical Processing Plant (ICPP). This report describes the environmental impacts of alternative strategies. These alternative strategies include leaving the calcine in its present form at the Idaho National Engineering Laboratory (INEL), or retrieving and modifying the calcine to a more durable waste form and disposing of it either at the INEL or in an offsite repository. This report addresses only the alternatives for a program to manage the high-level waste generated at the ICPP. 24 figures, 60 tables.

  18. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
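
    The shot-parallel structure of one gradient iteration can be sketched as below; this is a structural sketch only, in which solve, sample, adjoint, and diag_hess are hypothetical handles standing in for the MUMPS-based forward/adjoint solves, receiver sampling, and pseudo-Hessian of the paper, not the FWT2D code itself.

      import numpy as np

      def fwi_gradient_step(model, shots, freq, solve, sample, adjoint, diag_hess,
                            step=1e-2):
          """One frequency-domain FWI iteration (structural sketch).
          shots is a list of (source, observed_data) pairs for the chosen frequency."""
          grad = np.zeros_like(model)
          for src, d_obs in shots:                 # shots are independent, so this loop
              u = solve(model, freq, src)          # is the natural level to parallelize
              resid = sample(u) - d_obs            # data misfit at the receivers
              lam = adjoint(model, freq, resid)    # back-propagated residual wavefield
              grad += np.real(u * np.conj(lam))    # zero-lag correlation of the two fields
          grad /= diag_hess(model, freq)           # diagonal-Hessian scaling of the gradient
          return model - step * grad               # local (gradient) model update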

  19. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    PubMed Central

    Kuiken, Todd A; Hargrove, Levi J

    2014-01-01

    Objective. Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach. We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main results. Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts' Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts' Law tasks with high levels of path efficiency. Significance. These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control. PMID:25394366

  20. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2014-12-01

    Objective. Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach. We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main results. Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts’ Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts’ Law tasks with high levels of path efficiency. Significance. These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control.

  1. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  2. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.

  3. Hierarchical Fuzzy Control Applied to Parallel Connected UPS Inverters Using Average Current Sharing Scheme

    NASA Astrophysics Data System (ADS)

    Singh, Santosh Kumar; Ghatak Choudhuri, Sumit

    2018-05-01

    Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load, so interfacing the modules requires careful design and a proper control strategy. The potential of intuitive fuzzy logic (FL) control for systems with imprecise models is well known and can be exploited in parallel-connected UPS systems. A conventional FL controller, however, is computationally intensive, especially as the number of input variables grows. This paper proposes the application of hierarchical fuzzy logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulation results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.

  4. Adapting sustainable low-carbon technologies to reduce carbon dioxide emissions from coal-fired power plants in China

    NASA Astrophysics Data System (ADS)

    Kuo, Peter Shyr-Jye

    1997-09-01

    The scientific community is deeply concerned about the effect of greenhouse gases (GHGs) on global climate change. A major climate shift can result in tragic destruction to our world. Carbon dioxide (CO2) emissions from coal-fired power plants are major anthropogenic sources that contribute to potential global warming. The People's Republic of China, with its rapidly growing economy and heavy dependence on coal-fired power plants for electricity, faces increasingly serious environmental challenges. This research project seeks to develop viable methodologies for reducing the potential global warming effects and serious air pollution arising from excessive coal burning. China serves as a case study for this research project. Major resolution strategies are developed through intensive literature reviews to identify sustainable technologies that can minimize adverse environmental impacts while meeting China's economic needs. The research thereby contributes technological knowledge to the field of Applied Sciences. The research also integrates modern power generation technologies with China's current and future energy requirements. With these objectives in mind, this project examines how China's environmental issues are related to China's power generation methods. This study then makes strategic recommendations that emphasize low-carbon technologies as sustainable energy generating options to be implemented in China. These low-carbon technologies consist of three options: (1) using cleaner fuels converted from China's plentiful domestic coal resources; (2) applying high-efficiency gas turbine systems for power generation; and (3) integrating coal gasification processes with energy saving combined cycle gas turbine systems. Each method can perform independently, but a combined strategy can achieve the greatest CO2 reductions. To minimize economic impacts caused by technological changes, this study also addresses additional alternatives that can be implemented in parallel with the proposed technologies. Principal options include promoting wind, solar and biogas as alternative energies; encouraging reforestation; using economic incentives to change energy policies; and gradually replacing obsolete facilities with new power plants. This study finds that the limited capacity and associated costs of alternative energies are the main factors that prevent competition with coal-based energy in China today.

  5. Polygyny, mate-guarding, and posthumous fertilization as alternative male mating strategies.

    PubMed

    Zamudio, K R; Sinervo, B

    2000-12-19

    Alternative male mating strategies within populations are thought to be evolutionarily stable because different behaviors allow each male type to successfully gain access to females. Although alternative male strategies are widespread among animals, quantitative evidence for the success of discrete male strategies is available for only a few systems. We use nuclear microsatellites to estimate the paternity rates of three male lizard strategies previously modeled as a rock-paper-scissors game. Each strategy has strengths that allow it to outcompete one morph, and weaknesses that leave it vulnerable to the strategy of another. Blue-throated males mate-guard their females and avoid cuckoldry by yellow-throated "sneaker" males, but mate-guarding is ineffective against aggressive orange-throated neighbors. The ultradominant orange-throated males are highly polygynous and maintain large territories; they overpower blue-throated neighbors and cosire offspring with their females, but are often cuckolded by yellow-throated males. Finally, yellow-throated sneaker males sire offspring via secretive copulations and often share paternity of offspring within a female's clutch. Sneaker males sire more offspring posthumously, indicating that sperm competition may be an important component of their strategy.

  6. Unbiased Rare Event Sampling in Spatial Stochastic Systems Biology Models Using a Weighted Ensemble of Trajectories

    PubMed Central

    Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.

    2016-01-01

    The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
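
    The split-merge bookkeeping at the heart of the weighted ensemble strategy can be illustrated with a short, hypothetical sketch (Python; not the authors' implementation): heavy trajectories in a bin are replicated with their weight divided, light ones are merged with the survivor chosen in proportion to weight, and the total weight in the bin is conserved so that estimates remain unbiased.

        import random

        def resample_bin(trajs, target):
            """One weighted-ensemble resampling pass within a single bin.

            trajs: list of (state, weight) pairs; target: desired trajectory count.
            """
            trajs = list(trajs)
            # Split: replicate the heaviest trajectory, dividing its weight.
            while len(trajs) < target:
                i = max(range(len(trajs)), key=lambda k: trajs[k][1])
                state, w = trajs[i]
                trajs[i] = (state, w / 2.0)
                trajs.append((state, w / 2.0))
            # Merge: combine the two lightest; the survivor is picked with
            # probability proportional to its weight, keeping estimates unbiased.
            while len(trajs) > target:
                trajs.sort(key=lambda t: t[1])
                (s1, w1), (s2, w2) = trajs[0], trajs[1]
                keep = s1 if random.random() < w1 / (w1 + w2) else s2
                trajs = [(keep, w1 + w2)] + trajs[2:]
            return trajs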

  7. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    PubMed

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

    There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    The overset grid methodology has significantly reduced the time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high-performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
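
    The paper's grid grouping relies on a sophisticated graph partitioner; as a much simpler illustration of the load-balancing goal, the hypothetical sketch below greedily assigns each overset grid (largest first) to the currently least-loaded processor by cell count.

        def group_grids(grid_sizes, n_procs):
            """Greedy grouping of overset grids onto processors by cell count."""
            groups = [[] for _ in range(n_procs)]
            loads = [0] * n_procs
            for name, size in sorted(grid_sizes.items(), key=lambda kv: -kv[1]):
                p = loads.index(min(loads))      # least-loaded processor so far
                groups[p].append(name)
                loads[p] += size
            return groups, loads

        # Hypothetical example: eight grids of uneven size on 3 processors.
        grids = {"wing": 900000, "fuselage": 750000, "tail": 300000,
                 "nacelle": 280000, "flap": 150000, "box1": 120000,
                 "box2": 110000, "farfield": 60000}
        print(group_grids(grids, 3))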

  9. Good Questions: Great Ways to Differentiate Mathematics Instruction

    ERIC Educational Resources Information Center

    Small, Marian

    2009-01-01

    Using differentiated instruction in the classroom can be a challenge, especially when teaching mathematics. This book cuts through the difficulties with two powerful and universal strategies that teachers can use across all math content: Open Questions and Parallel Tasks. Specific strategies and examples for grades Kindergarten - 8 are organized…

  10. Tri-state oriented parallel processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tenenbaum, J.; Wallach, Y.

    1982-08-01

    The MOPPS, an alternating sequential/parallel system introduced a few years ago, is modified even though it satisfactorily solved a number of real-time problems. The new system, the TOPPS, is described and compared to the MOPPS, and two applications are chosen to show that it is superior. The advantage of having a third basic mode, the ring mode, is illustrated when solving sets of linear equations with band matrices; the advantage of having independent I/O for the slaves is illustrated for biomedical signal analysis. 11 references.

  11. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
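
    LMC's own interfaces are not reproduced here; as a generic illustration of the sampling loop such an engine drives, the sketch below implements random-walk Metropolis-Hastings with the log-posterior treated as an expensive external call (assumed names, Python).

        import math, random

        def metropolis(log_post, x0, step, n_steps):
            """Minimal random-walk Metropolis sampler.

            log_post(x) stands in for an expensive log-posterior, e.g. a wrapper
            around third-party likelihood software; x is a list of floats.
            """
            x, lp = list(x0), log_post(x0)
            chain = []
            for _ in range(n_steps):
                prop = [xi + random.gauss(0.0, step) for xi in x]
                lp_prop = log_post(prop)
                if math.log(random.random()) < lp_prop - lp:   # accept/reject
                    x, lp = prop, lp_prop
                chain.append(list(x))
            return chain

        # Toy target: a standard 2D Gaussian.
        samples = metropolis(lambda x: -0.5 * sum(v * v for v in x),
                             x0=[0.0, 0.0], step=0.5, n_steps=1000)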

  12. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.

  13. Coordinated Post-transcriptional Regulation of Hsp70.3 Gene Expression by MicroRNA and Alternative Polyadenylation*

    PubMed Central

    Tranter, Michael; Helsley, Robert N.; Paulding, Waltke R.; McGuinness, Michael; Brokamp, Cole; Haar, Lauren; Liu, Yong; Ren, Xiaoping; Jones, W. Keith

    2011-01-01

    Heat shock protein 70 (Hsp70) is well documented to possess general cytoprotective properties in protecting the cell against stressful and noxious stimuli. We have recently shown that expression of the stress-inducible Hsp70.3 gene in the myocardium in response to ischemic preconditioning is NF-κB-dependent and necessary for the resulting late phase cardioprotection against a subsequent ischemia/reperfusion injury. Here we show that the Hsp70.3 gene product is subject to post-transcriptional regulation through parallel regulatory processes involving microRNAs and alternative polyadenylation of the mRNA transcript. First, we show that cardiac ischemic preconditioning of the in vivo mouse heart results in decreased levels of two Hsp70.3-targeting microRNAs: miR-378* and miR-711. Furthermore, an ischemic or heat shock stimulus induces alternative polyadenylation of the expressed Hsp70.3 transcript that results in the accumulation of transcripts with a shortened 3′-UTR. This shortening of the 3′-UTR results in the loss of the binding site for the suppressive miR-378* and thus renders the alternatively polyadenylated transcript insusceptible to miR-378*-mediated suppression. Results also suggest that the alternative polyadenylation-mediated shortening of the Hsp70.3 3′-UTR relieves translational suppression observed in the long 3′-UTR variant, allowing for a more robust increase in protein expression. These results demonstrate alternative polyadenylation of Hsp70.3 in parallel with ischemic or heat shock-induced up-regulation of mRNA levels and implicate the importance of this process in post-transcriptional control of Hsp70.3 expression. PMID:21757701

  14. PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin

    2017-01-01

    We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT) on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using benchmark problems taken from the references and obtain reproducible results. The extensions to other laser-atom systems are straightforward with minimal modifications of the source code.
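
    A one-dimensional, serial reduction of the split-operator scheme is sketched below (NumPy, atomic units); the actual solver is three-dimensional and MPI-decomposed, so this only illustrates the half-potential / full-kinetic / half-potential splitting.

        import numpy as np

        def split_operator_step(psi, V, dx, dt):
            """One split-operator time step for the 1D TDSE (hbar = m = 1)."""
            k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
            psi = np.exp(-0.5j * V * dt) * psi                                 # potential half-step
            psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))   # kinetic full step
            return np.exp(-0.5j * V * dt) * psi                                # potential half-step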

  15. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
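
    For reference, a generic computed-torque control law is sketched below (Python/NumPy); the mass matrix M(q) and the lumped Coriolis/centrifugal/gravity term h(q, qd) are assumed to come from a dynamic model such as the virtual-work formulation described above, and the gain matrices are hypothetical.

        import numpy as np

        def computed_torque(q, qd, q_des, qd_des, qdd_des, M, h, Kp, Kd):
            """Generic computed-torque law (not the DELTA-specific model).

            tau = M(q) (qdd_des + Kd (qd_des - qd) + Kp (q_des - q)) + h(q, qd)
            linearizes and decouples the closed-loop error dynamics.
            """
            e, ed = q_des - q, qd_des - qd
            return M(q) @ (qdd_des + Kd @ ed + Kp @ e) + h(q, qd)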

  16. Tabulation as a high-resolution alternative to coarse-graining protein interactions: Initial application to virus capsid subunits

    NASA Astrophysics Data System (ADS)

    Spiriti, Justin; Zuckerman, Daniel M.

    2015-12-01

    Traditional coarse-graining based on a reduced number of interaction sites often entails a significant sacrifice of chemical accuracy. As an alternative, we present a method for simulating large systems composed of interacting macromolecules using an energy tabulation strategy previously devised for small rigid molecules or molecular fragments [S. Lettieri and D. M. Zuckerman, J. Comput. Chem. 33, 268-275 (2012); J. Spiriti and D. M. Zuckerman, J. Chem. Theory Comput. 10, 5161-5177 (2014)]. We treat proteins as rigid and construct distance and orientation-dependent tables of the interaction energy between them. Arbitrarily detailed interactions may be incorporated into the tables, but as a proof-of-principle, we tabulate a simple α-carbon Gō-like model for interactions between dimeric subunits of the hepatitis B viral capsid. This model is significantly more structurally realistic than previous models used in capsid assembly studies. We are able to increase the speed of Monte Carlo simulations by a factor of up to 6700 compared to simulations without tables, with only minimal further loss in accuracy. To obtain further enhancement of sampling, we combine tabulation with the weighted ensemble (WE) method, in which multiple parallel simulations are occasionally replicated or pruned in order to sample targeted regions of a reaction coordinate space. In the initial study reported here, WE is able to yield pathways of the final ˜25% of the assembly process.
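
    The tabulation idea (precompute an expensive interaction on a grid once, then replace later evaluations by interpolation) can be sketched in a few lines; the example below is distance-only and uses a toy Lennard-Jones pair energy, whereas the paper's tables are distance- and orientation-dependent.

        import numpy as np

        def build_table(pair_energy, r_min, r_max, n_bins):
            """Pre-tabulate an expensive pair interaction on a distance grid."""
            r = np.linspace(r_min, r_max, n_bins)
            return r, np.array([pair_energy(ri) for ri in r])

        def lookup(r_grid, e_grid, r):
            """Linear interpolation into the table replaces the costly evaluation."""
            return np.interp(r, r_grid, e_grid)

        lj = lambda r: 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # toy pair energy
        r_grid, e_grid = build_table(lj, 0.8, 3.0, 512)
        print(lookup(r_grid, e_grid, 1.12))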

  17. Development of alternative sulfur dioxide control strategies for a metropolitan area and its environs, utilizing a modified climatological dispersion model

    Treesearch

    K. J. Skipka; D. B. Smith

    1977-01-01

    Alternative control strategies were developed for achieving compliance with ambient air quality standards in Portland, Maine, and its environs, using a modified climatological dispersion model (CDM) and manipulating the sulfur content of the fuel oil consumed in four concentric zones. Strategies were evaluated for their impact on ambient air quality, economics, and...

  18. Alternative/Complementary Approaches to Treatment of Children with Autism Spectrum Disorders.

    ERIC Educational Resources Information Center

    Levy, Susan E.; Hyman, Susan L.

    2002-01-01

    This article reviews common complementary or alternative medicine (CAM) treatments used to address symptoms of autistic spectrum disorders, including vitamin supplements, medications, antibiotics, antifungals, diet strategies, chelation/mercury detoxification, and nonbiologic treatments. Strategies that professionals may use in assessing the…

  19. Scalability and Portability of Two Parallel Implementations of ADI

    NASA Technical Reports Server (NTRS)

    Phung, Thanh; VanderWijngaart, Rob F.

    1994-01-01

    Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save message passing syntax) to a PVM environment on a modest size cluster of workstations.

  20. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  1. Expressing Parallelism with ROOT

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
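
    A minimal PyROOT sketch of the implicit multi-threading described above is shown below, assuming a reasonably recent ROOT release; the tree name, file name and branch are hypothetical.

        import ROOT

        # Enable ROOT's implicit multi-threading; subsequent RDataFrame event
        # loops are then parallelized over the available cores.
        ROOT.EnableImplicitMT()

        df = ROOT.RDataFrame("events", "data.root")    # hypothetical tree and file
        h = df.Filter("pt > 20").Histo1D("pt")         # lazily booked, runs in parallel
        print(h.GetEntries())                          # triggers the event loop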

  2. MLP: A Parallel Programming Alternative to MPI for New Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    1999-01-01

    Recent developments at the NASA AMES Research Center's NAS Division have demonstrated that the new generation of NUMA based Symmetric Multi-Processing systems (SMPs), such as the Silicon Graphics Origin 2000, can successfully execute legacy vector oriented CFD production codes at sustained rates far exceeding processing rates possible on dedicated 16 CPU Cray C90 systems. This high level of performance is achieved via shared memory based Multi-Level Parallelism (MLP). This programming approach, developed at NAS and outlined below, is distinct from the message passing paradigm of MPI. It offers parallelism at both the fine and coarse grained level, with communication latencies that are approximately 50-100 times lower than typical MPI implementations on the same platform. Such latency reductions offer the promise of performance scaling to very large CPU counts. The method draws on, but is also distinct from, the newly defined OpenMP specification, which uses compiler directives to support a limited subset of multi-level parallel operations. The NAS MLP method is general, and applicable to a large class of NASA CFD codes.

  3. Expressing Parallelism with ROOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piparo, D.; Tejedor, E.; Guiraud, E.

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  4. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to yielding large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are obtained in the sequential mode of application, and such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods; it was found that, by combining several of them, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed. Complementing Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications that demonstrate the potential of the HPT strategy.

  5. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  6. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  7. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers; the user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players, whose use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen, including CPU and message routing bottlenecks and the dynamic status of the software.

  8. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  9. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  10. Increasing processor utilization during parallel computation rundown

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1986-01-01

    Some parallel processing environments provide for asynchronous execution and completion of general purpose parallel computations from a single computational phase. When all the computations from such a phase are complete, a new parallel computational phase is begun. Depending upon the granularity of the parallel computations to be performed, there may be a shortage of available work as a particular computational phase draws to a close (computational rundown). This can result in the waste of computing resources and the delay of the overall problem. In many practical instances, strict sequential ordering of phases of parallel computation is not totally required. In such cases, the beginning of one phase can be correctly computed before the end of a previous phase is completed. This allows additional work to be generated somewhat earlier to keep computing resources busy during each computational rundown. The conditions under which this can occur are identified and the frequency of occurrence of such overlapping in an actual parallel Navier-Stokes code is reported. A language construct is suggested and possible control strategies for the management of such computational phase overlapping are discussed.

  11. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool, but it is a very data-intensive task. To accelerate watershed algorithms toward real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs, i.e., homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. We compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures, analyze the performance measurements of each parallel implementation, and examine the impact of the different sources of overhead on their performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each programming model on the performance of the parallel implementations.

  12. Comparative field trial of alternative vector control strategies for non-domiciliated Triatoma dimidiata.

    PubMed

    Ferral, Jhibran; Chavez-Nuñez, Leysi; Euan-Garcia, Maria; Ramirez-Sierra, Maria Jesus; Najera-Vazquez, M Rosario; Dumonteil, Eric

    2010-01-01

    Chagas disease is a major vector-borne disease, and regional initiatives based on insecticide spraying have successfully controlled domiciliated vectors in many regions. Non-domiciliated vectors remain responsible for a significant transmission risk, and their control is a challenge. We performed a proof-of-concept field trial to test alternative strategies in rural Yucatan, Mexico. Follow-up of house infestation for two seasons following the interventions confirmed that insecticide spraying should be performed annually for the effective control of Triatoma dimidiata; however, it also confirmed that insect screens or long-lasting impregnated curtains may represent good alternative strategies for the sustained control of these vectors. Ecosystemic peridomicile management would be an excellent complementary strategy to improve the cost-effectiveness of interventions. Because these strategies would also be effective against other vector-borne diseases, such as malaria or dengue, they could be integrated within a multi-disease control program.

  13. Resources Management Strategy For Mud Crabs (Scylla spp.) In Pemalang Regency

    NASA Astrophysics Data System (ADS)

    Purnama Fitri, Aristi Dian; Boesono, Herry; Sabdono, Agus; Adlina, Nadia

    2017-02-01

    The aim of this research is to develop resource management strategies for mud crab (Scylla spp.) in Pemalang Regency. The method used is a descriptive survey in a case study. This research used primary and secondary data. Primary data were collected through field observations and in-depth interviews with key stakeholders. Secondary data were collected from related publications and documents issued by the competent institutions. SWOT analysis was used to inventory the strengths, weaknesses, opportunities and threats, and a TOWS matrix was used to develop alternative resource management strategies. The SWOT analysis yielded six alternative strategies that can be applied to optimize fisheries development in Pemalang Regency: controlling mud crab fishing gear, restricting the allowable catch size, controlling the mud crab fishing season, monitoring the mud crab catch, establishing a management institution to ensure implementation of the regulations, and developing mud crab aquaculture. These strategies can be combined to optimize resource development in Pemalang Regency.

  14. Water-fat separation with parallel imaging based on BLADE.

    PubMed

    Weng, Dehe; Pan, Yanli; Zhong, Xiaodong; Zhuo, Yan

    2013-06-01

    Uniform suppression of fat signal is desired in clinical applications. Based on phase differences introduced by different chemical shift frequencies, the Dixon method and its variations are used as alternatives to fat saturation methods, which are sensitive to B0 inhomogeneities. Iterative Decomposition of water and fat with Echo Asymmetry and Least squares estimation (IDEAL) separates water and fat images with flexible echo shifting. Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER, alternatively termed BLADE), in conjunction with IDEAL, yields Turboprop IDEAL (TP-IDEAL) and allows for decomposition of water and fat signal with motion correction. However, the flexibility of its parameter setting is limited, and the related phase correction is complicated. To address these problems, a novel method, BLADE-Dixon, is proposed in this study. This method used the same polarity readout gradients (fly-back gradients) to acquire in-phase and opposed-phase images, which led to less complicated phase correction and more flexible parameter setting compared to TP-IDEAL. Parallel imaging and undersampling were integrated to reduce scan time. Phantom, orbit, neck and knee images were acquired with BLADE-Dixon. Water-fat separation results were compared to those measured with conventional turbo spin echo (TSE) Dixon and TSE with fat saturation, respectively, to demonstrate the performance of BLADE-Dixon. Copyright © 2013 Elsevier Inc. All rights reserved.
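
    The basic two-point Dixon arithmetic underlying these methods is simple; the sketch below (NumPy) shows only the in-phase/opposed-phase combination and omits the B0 and phase-error corrections that TP-IDEAL and BLADE-Dixon actually require.

        import numpy as np

        def two_point_dixon(in_phase, opposed_phase):
            """Basic two-point Dixon water-fat separation (no phase correction)."""
            water = 0.5 * (in_phase + opposed_phase)   # water = (IP + OP) / 2
            fat = 0.5 * (in_phase - opposed_phase)     # fat   = (IP - OP) / 2
            return np.abs(water), np.abs(fat)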

  15. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to retrieve the surface reflectance by decoupling the atmosphere and the surface, which consumes a long computational time. Parallel computing is one way to accelerate this step. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction of a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. Then, a multispectral remote sensing image from the Chinese Gaofen-2 satellite is used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases as well. The maximum speed-up is 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
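
    A minimal sketch of the multi-CPU strategy described above is given below, using a Python process pool over image row blocks; the per-tile correction is only a placeholder, since the real step inverts a radiative transfer model for each pixel.

        from multiprocessing import Pool
        import numpy as np

        def correct_tile(tile):
            """Placeholder for the per-tile atmospheric correction."""
            return tile * 1.05 - 0.01     # hypothetical gain/offset correction

        def parallel_correction(image, n_cpus, n_tiles):
            """Split an image into row blocks and correct them on n_cpus workers."""
            tiles = np.array_split(image, n_tiles, axis=0)
            with Pool(processes=n_cpus) as pool:
                corrected = pool.map(correct_tile, tiles)
            return np.concatenate(corrected, axis=0)

        if __name__ == "__main__":
            img = np.random.rand(4096, 4096).astype(np.float32)
            out = parallel_correction(img, n_cpus=8, n_tiles=64)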

  16. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, a parallel model for stereo imaging aimed at embedded multi-core processing platforms is studied and verified. After analyzing the computational load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.

  17. Distinct lateral inhibitory circuits drive parallel processing of sensory information in the mammalian olfactory bulb

    PubMed Central

    Geramita, Matthew A; Burton, Shawn D; Urban, Nathan N

    2016-01-01

    Splitting sensory information into parallel pathways is a common strategy in sensory systems. Yet, how circuits in these parallel pathways are composed to maintain or even enhance the encoding of specific stimulus features is poorly understood. Here, we have investigated the parallel pathways formed by mitral and tufted cells of the olfactory system in mice and characterized the emergence of feature selectivity in these cell types via distinct lateral inhibitory circuits. We find differences in activity-dependent lateral inhibition between mitral and tufted cells that likely reflect newly described differences in the activation of deep and superficial granule cells. Simulations show that these circuit-level differences allow mitral and tufted cells to best discriminate odors in separate concentration ranges, indicating that segregating information about different ranges of stimulus intensity may be an important function of these parallel sensory pathways. DOI: http://dx.doi.org/10.7554/eLife.16039.001 PMID:27351103

  18. Portable multi-node LQCD Monte Carlo simulations using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, which is used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization across multiple computing nodes, using OpenACC to manage parallelism within a node and OpenMPI to manage parallelism among nodes. We first discuss the available strategies for maximizing performance, then describe selected relevant details of the code, and finally measure the performance and scaling that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.

  19. Hunters' acceptability of the surveillance system and alternative surveillance strategies for classical swine fever in wild boar - a participatory approach.

    PubMed

    Schulz, Katja; Calba, Clémentine; Peyre, Marisa; Staubach, Christoph; Conraths, Franz J

    2016-09-06

    Surveillance measures can only be effective if key players in the system accept them. Acceptability, which describes the willingness of persons to contribute, is often analyzed using participatory methods. Participatory epidemiology enables the active involvement of key players in the assessment of epidemiological issues. In the present study, we used a participatory method recently developed by CIRAD (Centre de Coopération Internationale en Recherche Agronomique pour le Développement) to evaluate the functionality and acceptability of Classical Swine Fever (CSF) surveillance in wild boar in Germany, which is highly dependent on the participation of hunters. The acceptability of alternative surveillance strategies was also analyzed. By conducting focus group discussions, potential vulnerabilities in the system were detected and feasible alternative surveillance strategies identified. Trust in the current surveillance system is high, whereas the acceptability of the operation of the system is medium. Analysis of the acceptability of alternative surveillance strategies showed how risk-based surveillance approaches can be combined to develop strategies that have sufficient support and functionality. Furthermore, some surveillance strategies were clearly rejected by the hunters. Thus, the implementation of such strategies may be difficult. Participatory methods can be used to evaluate the functionality and acceptability of existing surveillance plans for CSF among hunters and to optimize plans regarding their chances of successful implementation.

  20. A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TETRAHEDRAL DOMAINS

    PubMed Central

    Fu, Zhisong; Kirby, Robert M.; Whitaker, Ross T.

    2014-01-01

    Generating numerical solutions to the eikonal equation and its many variations has a broad range of applications in both the natural and computational sciences. Efficient solvers on cutting-edge, parallel architectures require new algorithms that may not be theoretically optimal, but that are designed to allow asynchronous solution updates and have limited memory access patterns. This paper presents a parallel algorithm for solving the eikonal equation on fully unstructured tetrahedral meshes. The method is appropriate for the type of fine-grained parallelism found on modern massively-SIMD architectures such as graphics processors and takes into account the particular constraints and capabilities of these computing platforms. This work builds on previous work for solving these equations on triangle meshes; in this paper we adapt and extend previous two-dimensional strategies to accommodate three-dimensional, unstructured, tetrahedralized domains. These new developments include a local update strategy with data compaction for tetrahedral meshes that provides solutions on both serial and parallel architectures, with a generalization to inhomogeneous, anisotropic speed functions. We also propose two new update schemes, specialized to mitigate the natural data increase observed when moving to three dimensions, and the data structures necessary for efficiently mapping data to parallel SIMD processors in a way that maintains computational density. Finally, we present descriptions of the implementations for a single CPU, as well as multicore CPUs with shared memory and SIMD architectures, with comparative results against state-of-the-art eikonal solvers. PMID:25221418
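
    A strongly simplified, serial version of the fast iterative method is sketched below on a regular 2D grid (the paper treats unstructured tetrahedral meshes and SIMD parallelism); an active list holds the nodes whose values are still changing, and the upwind local solver is applied to them repeatedly.

        import math
        import numpy as np

        def neighbors(i, j, ny, nx):
            return {(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < ny and 0 <= j + dj < nx}

        def local_solver(a, b, f):
            """Upwind solution of |grad u| = f at a node with unit grid spacing."""
            a, b = min(a, b), max(a, b)
            if b - a >= f:
                return a + f
            return 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))

        def fim(speed, sources):
            """Simplified fast iterative eikonal solver on a 2D grid."""
            ny, nx = speed.shape
            u = np.full((ny, nx), np.inf)
            active = set()
            for (i, j) in sources:
                u[i, j] = 0.0
                active |= neighbors(i, j, ny, nx)
            while active:
                nxt = set()
                for (i, j) in active:
                    a = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < nx - 1 else np.inf)
                    b = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < ny - 1 else np.inf)
                    new = local_solver(a, b, 1.0 / speed[i, j])
                    if new < u[i, j] - 1e-9:          # still improving: stay active
                        u[i, j] = new
                        nxt |= {(i, j)} | neighbors(i, j, ny, nx)
                active = nxt
            return u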

  1. Parallel approach for bioinspired algorithms

    NASA Astrophysics Data System (ADS)

    Zaporozhets, Dmitry; Zaruba, Daria; Kulieva, Nina

    2018-05-01

    In the paper, a probabilistic parallel approach based on a population heuristic, such as a genetic algorithm, is suggested. The authors propose using a multithreading approach at the micro level, at which new alternative solutions are generated. On each iteration, several threads can be started that independently use the same population to generate new solutions. After all threads finish, a selection operator combines the obtained results into a new population. To confirm the effectiveness of the suggested approach, the authors have developed software with which experimental computations can be carried out. The authors consider a classic optimization problem – finding a Hamiltonian cycle in a graph. Experiments show that, owing to the parallel approach at the micro level, an increase in running speed is obtained on graphs with 250 or more vertices.
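
    The micro-level threading idea can be illustrated as below: threads independently mutate tours drawn from a shared population, and a selection step merges the results into the next population (hypothetical names; note that CPython's global interpreter lock limits true CPU parallelism for pure-Python workers, so this sketch shows the structure rather than the speed-up).

        import random
        from concurrent.futures import ThreadPoolExecutor

        def tour_length(tour, dist):
            """dist is a square distance matrix (list of lists)."""
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def make_offspring(population, k):
            """Each thread independently generates k mutated copies of random parents."""
            out = []
            for _ in range(k):
                child = list(random.choice(population))
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]    # swap two vertices
                out.append(child)
            return out

        def parallel_generation(population, dist, n_threads, pop_size):
            with ThreadPoolExecutor(max_workers=n_threads) as ex:
                futures = [ex.submit(make_offspring, population, pop_size // n_threads)
                           for _ in range(n_threads)]
                candidates = population + [c for f in futures for c in f.result()]
            candidates.sort(key=lambda t: tour_length(t, dist))   # selection operator
            return candidates[:pop_size]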

  2. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three dimensional computer code. Simulations of three dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  3. Exploration of operator method digital optical computers for application to NASA

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Digital optical computer design has focused primarily on parallel (single point-to-point interconnection) implementation. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated; the focus is on a figure of merit termed the Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and fully digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology since it can support a very high degree of three-dimensional interconnect density and a high degree of fan-in without capacitive loading effects at very low power consumption levels.

  4. Training Pragmatic Language Skills through Alternate Strategies with a Blind Multiply Handicapped Child.

    ERIC Educational Resources Information Center

    Evans, C. J.; Johnson, C. J.

    1988-01-01

    A blind multiply handicapped preschooler was taught to respond appropriately to two adjacency pair types ("where question-answer" and "comment-acknowledgement"). The two alternative language acquisition strategies available to blind children were encouraged: echolalia to maintain communicative interactions and manual searching…

  5. Multi-Objective and Multidisciplinary Design Optimisation (MDO) of UAV Systems using Hierarchical Asynchronous Parallel Evolutionary Algorithms

    DTIC Science & Technology

    2007-09-17

    been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et... Coello et al. (Coello, Veldhuizen et al. 2002). 4.4.2 HIERARCHICAL POPULATION TOPOLOGY A hierarchical population topology, when integrated into...to hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen, Zydallis et al. 2003); it uses a master-slave

  6. Parallel Software Model Checking

    DTIC Science & Technology

    2015-01-08

    checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR. Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...

  7. Workgroup Report: Incorporating In Vitro Alternative Methods for Developmental Neurotoxicity into International Hazard and Risk Assessment Strategies

    PubMed Central

    Coecke, Sandra; Goldberg, Alan M; Allen, Sandra; Buzanska, Leonora; Calamandrei, Gemma; Crofton, Kevin; Hareng, Lars; Hartung, Thomas; Knaut, Holger; Honegger, Paul; Jacobs, Miriam; Lein, Pamela; Li, Abby; Mundy, William; Owen, David; Schneider, Steffen; Silbergeld, Ellen; Reum, Torsten; Trnovec, Tomas; Monnet-Tschudi, Florianne; Bal-Price, Anna

    2007-01-01

    This is the report of the first workshop on Incorporating In Vitro Alternative Methods for Developmental Neurotoxicity (DNT) Testing into International Hazard and Risk Assessment Strategies, held in Ispra, Italy, on 19–21 April 2005. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and jointly organized by ECVAM, the European Chemical Industry Council, and the Johns Hopkins University Center for Alternatives to Animal Testing. The primary aim of the workshop was to identify and catalog potential methods that could be used to assess how data from in vitro alternative methods could help to predict and identify DNT hazards. Working groups focused on two different aspects: a) details on the science available in the field of DNT, including discussions on the models available to capture the critical DNT mechanisms and processes, and b) policy and strategy aspects to assess the integration of alternative methods in a regulatory framework. This report summarizes these discussions and details the recommendations and priorities for future work. PMID:17589601

  8. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836

  9. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    PubMed Central

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
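
    The authors' positioning algorithm is not reproduced here; as one plausible illustration of image-based realignment, the sketch below locates an alignment mark by normalized cross-correlation with OpenCV and returns the offset needed to bring the chamber back to its preset position (function and variable names are hypothetical).

        import cv2

        def find_mark(frame, mark_template):
            """Locate an alignment mark via normalized cross-correlation."""
            res = cv2.matchTemplate(frame, mark_template, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            return max_loc                                   # (x, y) of best match

        def realign_offset(frame, mark_template, reference_xy):
            """Offset to apply so the chamber returns to its preset position."""
            x, y = find_mark(frame, mark_template)
            return reference_xy[0] - x, reference_xy[1] - y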

  10. Emergent Rules for Codon Choice Elucidated by Editing Rare Arginine Codons in Escherichia coli

    DTIC Science & Technology

    2016-09-20

    alternative codons are more likely to be viable. To evaluate synonymous and nonsynonymous alternatives to essential AGRs further, we implemented a CRISPR ... CRISPR-assisted MAGE). First, we designed oligos that changed not only the target AGR codon to NNN but also made several synonymous changes at least 50...nt downstream that would disrupt a 20-bp CRISPR target locus. MAGE was used to replace each AGR with NNN in parallel, and CRISPR/Cas9 was used to

  11. Multi-stage separations based on dielectrophoresis

    DOEpatents

    Mariella, Jr., Raymond P.

    2004-07-13

    A system utilizing multi-stage traps based on dielectrophoresis. Traps with electrodes arranged transverse to the flow and traps with electrodes arranged parallel to the flow with combinations of direct current and alternating voltage are used to trap, concentrate, separate, and/or purify target particles.

  12. Get the LED Out.

    ERIC Educational Resources Information Center

    Jewett, John W., Jr.

    1991-01-01

    Describes science demonstrations with light-emitting diodes that include electrical concepts of resistance, direct and alternating current, sine wave versus square wave, series and parallel circuits, and Faraday's Law; optics concepts of real and virtual images, photoresistance, and optical communication; and modern physics concepts of spectral…

  13. Special Issues on Learning Strategies: Parallels and Contrasts between Australian and Chinese Tertiary Education

    ERIC Educational Resources Information Center

    Yao, Yuzuo

    2017-01-01

    Learning strategies are crucial to student learning in higher education. In this paper, there are comparisons of student engagement, feedback mechanism and workload arrangements at some typical universities in Australia and China, which are followed by practical suggestions for active learning. First, an inclusive class would allow learners from…

  14. Student reasoning about graphs in different contexts

    NASA Astrophysics Data System (ADS)

    Ivanjek, Lana; Susac, Ana; Planinic, Maja; Andrasevic, Aneta; Milin-Sipus, Zeljka

    2016-06-01

    This study investigates university students' graph interpretation strategies and difficulties in mathematics, physics (kinematics), and contexts other than physics. Eight sets of parallel (isomorphic) mathematics, physics, and other context questions about graphs, which were developed by us, were administered to 385 first-year students at the Faculty of Science, University of Zagreb. Students were asked to provide explanations and/or mathematical procedures with their answers. Students' main strategies and difficulties identified through the analysis of those explanations and procedures are described. Student strategies of graph interpretation were found to be largely context dependent and domain specific. A small fraction of students have used the same strategy in all three domains (mathematics, physics, and other contexts) on most sets of parallel questions. Some students have shown indications of transfer of knowledge in the sense that they used techniques and strategies developed in physics for solving (or attempting to solve) other context problems. In physics, the preferred strategy was the use of formulas, which sometimes seemed to block the use of other, more productive strategies which students displayed in other domains. Students' answers indicated the presence of slope-height confusion and interval-point confusion in all three domains. Students generally better interpreted graph slope than the area under a graph, although the concept of slope still seemed to be quite vague for many. The interpretation of the concept of area under a graph needs more attention in both physics and mathematics teaching.

  15. OpenCL based machine learning labeling of biomedical datasets

    NASA Astrophysics Data System (ADS)

    Amoros, Oscar; Escalera, Sergio; Puig, Anna

    2011-03-01

    In this paper, we propose a two-stage labeling method of large biomedical datasets through a parallel approach in a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or to tag different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to easily analyze biomedical datasets by a non-expert user. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, making it possible to apply parallel programming paradigms on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied on the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available data sets and we compare our results to CPU-based strategies in terms of time and labeling speeds.
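
    A hedged sketch of the Adaboost testing stage the abstract describes, written as a vectorized NumPy evaluation of decision-stump weak classifiers; the stump parameterization is an assumption, and the vectorization only stands in for the paper's OpenCL kernel, in which each unlabeled sample is scored independently by its own thread.

      import numpy as np

      def adaboost_test(X, features, thresholds, polarities, alphas):
          """X: (n_samples, n_features). The remaining arrays describe the trained
          stumps: feature index, threshold, polarity (+/-1), and weight alpha."""
          # Column j holds weak classifier j's vote (+1/-1) for every sample.
          votes = polarities * np.sign(X[:, features] - thresholds)
          votes[votes == 0] = 1                  # break ties consistently
          return np.sign(votes @ alphas)         # weighted vote -> labels in {-1, +1}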

  16. Polygyny, mate-guarding, and posthumous fertilization as alternative male mating strategies

    PubMed Central

    Zamudio, Kelly R.; Sinervo, Barry

    2000-01-01

    Alternative male mating strategies within populations are thought to be evolutionarily stable because different behaviors allow each male type to successfully gain access to females. Although alternative male strategies are widespread among animals, quantitative evidence for the success of discrete male strategies is available for only a few systems. We use nuclear microsatellites to estimate the paternity rates of three male lizard strategies previously modeled as a rock-paper-scissors game. Each strategy has strengths that allow it to outcompete one morph, and weaknesses that leave it vulnerable to the strategy of another. Blue-throated males mate-guard their females and avoid cuckoldry by yellow-throated “sneaker” males, but mate-guarding is ineffective against aggressive orange-throated neighbors. The ultradominant orange-throated males are highly polygynous and maintain large territories; they overpower blue-throated neighbors and cosire offspring with their females, but are often cuckolded by yellow-throated males. Finally, yellow-throated sneaker males sire offspring via secretive copulations and often share paternity of offspring within a female's clutch. Sneaker males sire more offspring posthumously, indicating that sperm competition may be an important component of their strategy. PMID:11106369

  17. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by routing through transporter nodes

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a destination. Some packets are constrained to be routed through respective designated transporter nodes, the automated routing strategy determining a path from a respective source node to a respective transporter node, and from a respective transporter node to a respective destination node. Preferably, the source node chooses a routing policy from among multiple possible choices, and that policy is followed by all intermediate nodes. The use of transporter nodes allows greater flexibility in routing.

  18. Parallel firing strategy on Petri nets: A review

    NASA Astrophysics Data System (ADS)

    Mavlankulov, Gairatzhan; Turaev, Sherzod; Zhumabaeva, Laula; Zhukabayeva, Tamara

    2015-05-01

    In this paper we review recent results on Petri net controlled grammars and closely related topics. Though regulated grammars are one of the classic topics in formal language theory, Petri net controlled grammars remain an interesting subject of investigation for many reasons. This type of grammar can successfully be used to model new problems emerging in manufacturing systems, systems biology and other areas. Moreover, the graphical illustrability, the ability to represent both a grammar and its control in one structure, and the possibility of unifying different regulated rewritings make this formalization attractive for study. We also summarize the obtained results and propose a new concept, the parallel firing strategy on Petri nets.

  19. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.

  20. Strategies in probabilistic feedback learning in Parkinson patients OFF medication.

    PubMed

    Bellebaum, C; Kobza, S; Ferrea, S; Schnitzler, A; Pollok, B; Südmeyer, M

    2016-04-21

    Studies on classification learning suggested that altered dopamine function in Parkinson's Disease (PD) specifically affects learning from feedback. In patients OFF medication, enhanced learning from negative feedback has been described. This learning bias was not seen in observational learning from feedback, indicating different neural mechanisms for this type of learning. The present study aimed to compare the acquisition of stimulus-response-outcome associations in PD patients OFF medication and healthy control subjects in active and observational learning. 16 PD patients OFF medication and 16 controls were examined with three parallel learning tasks each, two feedback-based (active and observational) and one non-feedback-based paired associates task. No acquisition deficit was seen in the patients for any of the tasks. More detailed analyses on the learning strategies did, however, reveal that the patients showed more lose-shift responses during active feedback learning than controls, and that lose-shift and win-stay responses more strongly determined performance accuracy in patients than controls. For observational feedback learning, the performance of both groups correlated similarly with the performance in non-feedback-based paired associates learning and with the accuracy of observed performance. Also, patients and controls showed comparable evidence of feedback processing in observational learning. In active feedback learning, PD patients thus use different learning strategies than healthy controls. Analyses on observational learning did not yield differences between patients and controls, adding to recent evidence of a differential role of the human striatum in active and observational learning from feedback. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
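
    A small illustrative sketch (not the authors' analysis code) of the win-stay and lose-shift measures mentioned above, computed from a trial-by-trial record of choices and binary feedback.

      import numpy as np

      def win_stay_lose_shift(choices, feedback):
          """choices: sequence of chosen options; feedback: 1 = positive, 0 = negative.
          Returns the proportion of stays after wins and shifts after losses."""
          choices = np.asarray(choices)
          feedback = np.asarray(feedback)
          stay = choices[1:] == choices[:-1]
          wins = feedback[:-1] == 1
          losses = ~wins
          win_stay = stay[wins].mean() if wins.any() else np.nan
          lose_shift = (~stay[losses]).mean() if losses.any() else np.nan
          return win_stay, lose_shift

      # Example: win_stay_lose_shift([0, 0, 1, 1, 0], [1, 0, 1, 0, 1]) -> (1.0, 1.0)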

  1. Integrating enzyme immobilization and protein engineering: An alternative path for the development of novel and improved industrial biocatalysts.

    PubMed

    Bernal, Claudia; Rodríguez, Karen; Martínez, Ronny

    2018-06-09

    Enzyme immobilization often achieves reusable biocatalysts with improved operational stability and solvent resistance. However, these modifications are generally associated with a decrease in activity or detrimental modifications in catalytic properties. On the other hand, protein engineering aims to generate enzymes with increased performance at specific conditions by means of genetic manipulation, directed evolution and rational design. However, the resulting biocatalysts are generally produced as soluble enzymes (thus not reusable), and their performance under real operational conditions is uncertain. Combined protein engineering and enzyme immobilization approaches have been employed as parallel or consecutive strategies for improving an enzyme of interest. Recent reports show efforts on simultaneously improving both enzymatic and immobilization components through genetic modification of enzymes and optimizing binding chemistry for site-specific and oriented immobilization. Nonetheless, enzyme engineering and immobilization are usually performed as separate workflows to achieve improved biocatalysts. In this review, we summarize and discuss recent research aiming to integrate enzyme immobilization and protein engineering and propose strategies to further converge protein engineering and enzyme immobilization efforts into a novel "immobilized biocatalyst engineering" research field. We believe that through the integration of both enzyme engineering and enzyme immobilization strategies, novel biocatalysts can be obtained, not only as the sum of independently improved intrinsic and operational properties of enzymes, but ultimately tailored specifically for increased performance as immobilized biocatalysts, potentially paving the way for a qualitative jump in the development of efficient, stable biocatalysts with greater real-world potential in challenging bioprocess applications. Copyright © 2018. Published by Elsevier Inc.

  2. COT drives resistance to RAF inhibition through MAP kinase pathway reactivation.

    PubMed

    Johannessen, Cory M; Boehm, Jesse S; Kim, So Young; Thomas, Sapana R; Wardwell, Leslie; Johnson, Laura A; Emery, Caroline M; Stransky, Nicolas; Cogdill, Alexandria P; Barretina, Jordi; Caponigro, Giordano; Hieronymus, Haley; Murray, Ryan R; Salehi-Ashtiani, Kourosh; Hill, David E; Vidal, Marc; Zhao, Jean J; Yang, Xiaoping; Alkan, Ozan; Kim, Sungjoon; Harris, Jennifer L; Wilson, Christopher J; Myer, Vic E; Finan, Peter M; Root, David E; Roberts, Thomas M; Golub, Todd; Flaherty, Keith T; Dummer, Reinhard; Weber, Barbara L; Sellers, William R; Schlegel, Robert; Wargo, Jennifer A; Hahn, William C; Garraway, Levi A

    2010-12-16

    Oncogenic mutations in the serine/threonine kinase B-RAF (also known as BRAF) are found in 50-70% of malignant melanomas. Pre-clinical studies have demonstrated that the B-RAF(V600E) mutation predicts a dependency on the mitogen-activated protein kinase (MAPK) signalling cascade in melanoma-an observation that has been validated by the success of RAF and MEK inhibitors in clinical trials. However, clinical responses to targeted anticancer therapeutics are frequently confounded by de novo or acquired resistance. Identification of resistance mechanisms in a manner that elucidates alternative 'druggable' targets may inform effective long-term treatment strategies. Here we expressed ∼600 kinase and kinase-related open reading frames (ORFs) in parallel to interrogate resistance to a selective RAF kinase inhibitor. We identified MAP3K8 (the gene encoding COT/Tpl2) as a MAPK pathway agonist that drives resistance to RAF inhibition in B-RAF(V600E) cell lines. COT activates ERK primarily through MEK-dependent mechanisms that do not require RAF signalling. Moreover, COT expression is associated with de novo resistance in B-RAF(V600E) cultured cell lines and acquired resistance in melanoma cells and tissue obtained from relapsing patients following treatment with MEK or RAF inhibitors. We further identify combinatorial MAPK pathway inhibition or targeting of COT kinase activity as possible therapeutic strategies for reducing MAPK pathway activation in this setting. Together, these results provide new insights into resistance mechanisms involving the MAPK pathway and articulate an integrative approach through which high-throughput functional screens may inform the development of novel therapeutic strategies.

  3. Competitive PCR-High Resolution Melting Analysis (C-PCR-HRMA) for large genomic rearrangements (LGRs) detection: A new approach to assess quantitative status of BRCA1 gene in a reference laboratory.

    PubMed

    Minucci, Angelo; De Paolis, Elisa; Concolino, Paola; De Bonis, Maria; Rizza, Roberta; Canu, Giulia; Scaglione, Giovanni Luca; Mignone, Flavio; Scambia, Giovanni; Zuppi, Cecilia; Capoluongo, Ettore

    2017-07-01

    Evaluation of copy number variation (CNV) in BRCA1/2 genes, due to large genomic rearrangements (LGRs), is a mandatory analysis in hereditary breast and ovarian cancer families, if no pathogenic variants are found by sequencing. LGRs cannot be detected by conventional methods and several alternative methods have been developed. Since these approaches are expensive and time consuming, identification of alternative screening methods for LGRs detection is needed in order to reduce and optimize the diagnostic procedure. The aim of this study was to investigate a Competitive PCR-High Resolution Melting Analysis (C-PCR-HRMA) as a molecular tool to detect recurrent BRCA1 LGRs. C-PCR-HRMA was performed on exons 3, 14, 18, 19, 20 and 21 of the BRCA1 gene; exons 4, 6 and 7 of the ALB gene were used as reference fragments. This study showed that it is possible to identify recurrent BRCA1 LGRs by the melting peak height ratio between target (BRCA1) and reference (ALB) fragments. Furthermore, we underline that a peculiar amplicon-melting profile is associated with a specific BRCA1 LGR. All C-PCR-HRMA results were confirmed by Multiplex ligation-dependent probe amplification. C-PCR-HRMA has proved to be an innovative, efficient and fast method for BRCA1 LGRs detection. Given the sensitivity, specificity and ease of use, C-PCR-HRMA can be considered an attractive and powerful alternative to other methods for BRCA1 CNVs screening, improving molecular strategies for BRCA testing in the context of Massive Parallel Sequencing. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. 76 FR 27850 - Irish Potatoes Grown in Washington; Modification of the Rules and Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... continue exploring alternative marketing strategies. DATES: Effective July 1, 2011; comments received by... opportunity to continue exploring alternative marketing strategies. The authority for regulation is provided... DEPARTMENT OF AGRICULTURE Agricultural Marketing Service 7 CFR Part 946 [Doc. No. AMS-FV-11-0024...

  5. Alternative Youth Employment Strategies Project: Final Report.

    ERIC Educational Resources Information Center

    Sadd, Susan; And Others

    The Alternative Youth Employment Strategies (AYES) Project began as one of the demonstration projects funded under the Youth Employment and Demonstration Project Act in 1980. The program, which features three training models, is targeted toward high-risk, disadvantaged youth, especially minority youths from urban areas who had prior involvement…

  6. Reaching Students in Online Courses Using Alternative Formats

    ERIC Educational Resources Information Center

    Fidaldo, Patricia; Thormann, Joan

    2017-01-01

    This research was conducted to explore whether students enrolled in graduate level courses found some Universal Design for Learning (UDL) strategies useful and if they actually used them. The strategies we investigated were presenting course information in alternative formats including PowerPoints with voiceover, screencasts, and videos as an…

  7. Optimizing efficiency of height modeling for extensive forest inventories.

    Treesearch

    T.M. Barrett

    2006-01-01

    Although critical to monitoring forest ecosystems, inventories are expensive. This paper presents a generalizable method for using an integer programming model to examine tradeoffs between cost and estimation error for alternative measurement strategies in forest inventories. The method is applied to an example problem of choosing alternative height-modeling strategies...

  8. Strategies Reported Used by Instructors to Address Student Alternate Conceptions in Chemical Equilibrium

    ERIC Educational Resources Information Center

    Piquette, Jeff S.; Heikkinen, Henry W.

    2005-01-01

    This study explores general-chemistry instructors' awareness of and ability to identify and address common student learning obstacles in chemical equilibrium. Reported instructor strategies directed at remediating student alternate conceptions were investigated and compared with successful, literature-based conceptual change methods. Fifty-two…

  9. Identify and Translate Learnings from On-Going Assay ...

    EPA Pesticide Factsheets

    Presentation for FDA-CFSAN ILSI workshop on State of the Science on Alternatives to Animal Testing and Integration of Testing Strategies for Food Safety Assessments

  10. 20170228 - Identify and Translate Learnings from On-Going ...

    EPA Pesticide Factsheets

    Presentation for FDA-CFSAN ILSI workshop on State of the Science on Alternatives to Animal Testing and Integration of Testing Strategies for Food Safety Assessments

  11. Alternatives Reality: What to Expect from Future Allocations

    ERIC Educational Resources Information Center

    Sedlacek, Verne O.

    2014-01-01

    For well more than a decade, the "endowment model" of investing has been synonymous with increasing allocations to alternative investment strategies, defined largely as hedge funds, private real estate, private equity and venture capital and other, generally less liquid or illiquid strategies compared to public markets. This trend…

  12. STRATOP: A Model for Designing Effective Product and Communication Strategies. Paper No. 470.

    ERIC Educational Resources Information Center

    Pessemier, Edgar A.

    The STRATOP algorithm was developed to help planners and proponents find and test effectively designed choice objects and communication strategies. Choice objects can range from complex social, scientific, military, or educational alternatives to simple economic alternatives between assortments of branded convenience goods. Two classes of measured…

  13. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
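
    The "natural parallelism at a very high level" can be pictured with a minimal sketch: each of the m harmonics yields an independent subproblem that can be assembled and solved on its own worker. The per-harmonic assembly below is a toy stand-in, not the finite strip formulation from the paper.

      import numpy as np
      from multiprocessing import Pool

      def assemble_harmonic_system(m, n=50):
          """Toy stand-in for the per-harmonic assembly (illustrative only): a
          well-conditioned system whose stiffness grows with harmonic number m."""
          K = (1.0 + m**2) * np.eye(n) + 0.1 / n * np.ones((n, n))
          f = np.ones(n)
          return K, f

      def solve_harmonic(m):
          K_m, f_m = assemble_harmonic_system(m)
          return m, np.linalg.solve(K_m, f_m)

      def solve_all_harmonics(num_harmonics, workers=4):
          """Each harmonic's subproblem is independent, so they can be farmed out
          to separate processes (or vector units) with no communication."""
          with Pool(workers) as pool:
              return dict(pool.map(solve_harmonic, range(1, num_harmonics + 1)))

      if __name__ == "__main__":
          solutions = solve_all_harmonics(8)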

  14. Parallel dispatch: a new paradigm of electrical power system dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Wang, Fei-Yue; Wang, Qiang

    Modern power systems are evolving into sociotechnical systems with massive complexity, whose real-time operation and dispatch go beyond human capability. Thus, the need for developing and applying new intelligent power system dispatch tools is of great practical significance. In this paper, we introduce the overall business model of power system dispatch, the top level design approach of an intelligent dispatch system, and the parallel intelligent technology with its dispatch applications. We expect that a new dispatch paradigm, namely the parallel dispatch, can be established by incorporating various intelligent technologies, especially the parallel intelligent technology, to enable secure operation of complex power grids, extend system operators' capabilities, suggest optimal dispatch strategies, and provide decision-making recommendations according to power system operational goals.

  15. Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke; Wu, Kesheng; Bethel, E. Wes

    2009-06-02

    The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
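
    A serial sketch of the bin-based idea described above: records whose bins fall strictly inside the query range are accepted from bin numbers alone, and only the two boundary bins require a check against the base data. This NumPy version illustrates the data layout only; it is not the paper's GPU implementation or encoding.

      import numpy as np

      def build_bin_index(values, num_bins=256):
          """Assign each record's value to a bin; DP-BIS additionally clusters the
          base data by bin, which this sketch omits for brevity."""
          edges = np.linspace(values.min(), values.max(), num_bins + 1)
          bin_ids = np.clip(np.digitize(values, edges) - 1, 0, num_bins - 1)
          return edges, bin_ids

      def range_query(values, edges, bin_ids, lo, hi):
          """Accept fully covered bins from bin numbers; check only boundary bins."""
          lo_bin = np.searchsorted(edges, lo, side="right") - 1
          hi_bin = np.searchsorted(edges, hi, side="left") - 1
          inner = (bin_ids > lo_bin) & (bin_ids < hi_bin)
          boundary = (bin_ids == lo_bin) | (bin_ids == hi_bin)
          hits = inner | (boundary & (values >= lo) & (values <= hi))
          return np.flatnonzero(hits)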

  16. Parallel compression/decompression-based datapath architecture for multibeam mask writers

    NASA Astrophysics Data System (ADS)

    Chaudhary, Narendra; Savari, Serap A.

    2017-06-01

    Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's Law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time, we propose an alternate datapath architecture partly motivated by multibeam direct write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.

  17. Parallel compression/decompression-based datapath architecture for multibeam mask writers

    NASA Astrophysics Data System (ADS)

    Chaudhary, Narendra; Savari, Serap A.

    2017-10-01

    Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time, we propose an alternate datapath architecture partly motivated by multibeam direct-write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.
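
    To illustrate how sparsity in mask data can be exploited by compression, the sketch below run-length encodes a row of grayscale dose values so that long zero runs collapse to single (value, length) pairs. Run-length coding is chosen only for illustration; it is not the authors' parallel compression/decompression algorithm.

      import numpy as np

      def rle_encode(row):
          """Encode a 1D array of grayscale values as (value, run_length) pairs."""
          row = np.asarray(row)
          change = np.flatnonzero(np.diff(row)) + 1
          starts = np.concatenate(([0], change))
          lengths = np.diff(np.concatenate((starts, [row.size])))
          return list(zip(row[starts].tolist(), lengths.tolist()))

      def rle_decode(pairs):
          """Expand (value, run_length) pairs back into the original row."""
          return np.concatenate([np.full(n, v) for v, n in pairs])

      # Example: rle_encode([0, 0, 0, 7, 7, 0]) -> [(0, 3), (7, 2), (0, 1)]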

  18. A parallel genome-wide RNAi screening strategy to identify host proteins important for entry of Marburg virus and H5N1 influenza virus.

    PubMed

    Cheng, Han; Koning, Katie; O'Hearn, Aileen; Wang, Minxiu; Rumschlag-Booms, Emily; Varhegyi, Elizabeth; Rong, Lijun

    2015-11-24

    Genome-wide RNAi screening has been widely used to identify host proteins involved in replication and infection of different viruses, and numerous host factors are implicated in the replication cycles of these viruses, demonstrating the power of this approach. However, discrepancies on target identification of the same viruses by different groups suggest that high throughput RNAi screening strategies need to be carefully designed, developed and optimized prior to the large scale screening. Two genome-wide RNAi screens were performed in parallel against the entry of pseudotyped Marburg viruses and avian influenza virus H5N1 utilizing an HIV-1 based surrogate system, to identify host factors which are important for virus entry. A comparative analysis approach was employed in data analysis, which alleviated systematic positional effects and reduced the false positive number of virus-specific hits. The parallel nature of the strategy allows us to easily identify the host factors for a specific virus with a greatly reduced number of false positives in the initial screen, which is one of the major problems with high throughput screening. The power of this strategy is illustrated by a genome-wide RNAi screen for identifying the host factors important for Marburg virus and/or avian influenza virus H5N1 as described in this study. This strategy is particularly useful for highly pathogenic viruses since pseudotyping allows us to perform high throughput screens in the biosafety level 2 (BSL-2) containment instead of the BSL-3 or BSL-4 for the infectious viruses, with alleviated safety concerns. The screening strategy together with the unique comparative analysis approach makes the data more suitable for hit selection and enables us to identify virus-specific hits with a much lower false positive rate.
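
    A hypothetical sketch of the kind of comparative hit selection described above: per-screen robust z-scores, with a gene retained as virus-specific only if it scores strongly in one screen and not the other. The scoring and cutoffs are assumptions for illustration, not the authors' statistics.

      import numpy as np

      def robust_z(x):
          """Median/MAD z-scores, less sensitive to outliers than mean/SD."""
          x = np.asarray(x, dtype=float)
          med = np.median(x)
          mad = np.median(np.abs(x - med)) * 1.4826   # MAD scaled to ~sigma
          return (x - med) / mad

      def virus_specific_hits(entry_marburg, entry_h5n1, z_hit=-3.0, z_other=-1.0):
          """Entry readouts per gene (lower = stronger block of entry after knockdown)."""
          z_m, z_h = robust_z(entry_marburg), robust_z(entry_h5n1)
          marburg_only = np.flatnonzero((z_m <= z_hit) & (z_h > z_other))
          h5n1_only = np.flatnonzero((z_h <= z_hit) & (z_m > z_other))
          return marburg_only, h5n1_only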

  19. Alternative approaches to conventional treatment of acute uncomplicated urinary tract infection in women.

    PubMed

    Foxman, Betsy; Buxton, Miatta

    2013-04-01

    The increasing resistance of uropathogens to antibiotics and recognition of the generally self-limiting nature of uncomplicated urinary tract infection (UTI) suggest that it is time to reconsider empirical treatment of UTI using antibiotics. Identifying new and effective strategies to prevent recurrences and alternative treatment strategies are a high priority. We review the recent literature regarding the effects of functional food products, probiotics, vaccines, and alternative treatments on treating and preventing UTI.

  20. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  1. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
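
    One simple form of the device-level load balancing mentioned above is to split the photon budget across heterogeneous devices in proportion to throughput measured in a short pilot run; the sketch below shows that proportional split. The device names and rates are illustrative assumptions, not values from the paper.

      def split_photons(total_photons, throughput_per_device):
          """throughput_per_device: {device_name: photons_per_second from a pilot run}."""
          total_rate = sum(throughput_per_device.values())
          shares = {dev: int(total_photons * rate / total_rate)
                    for dev, rate in throughput_per_device.items()}
          # Give any rounding remainder to the fastest device.
          fastest = max(throughput_per_device, key=throughput_per_device.get)
          shares[fastest] += total_photons - sum(shares.values())
          return shares

      # Example: split_photons(10**8, {"gpu0": 5.0e6, "gpu1": 4.5e6, "cpu": 0.5e6})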

  2. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two and three dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the others are made explicit at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
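
    The direction-splitting structure can be illustrated with a much simpler stand-in: a Peaceman-Rachford ADI step for the 2D heat equation, in which each half step is implicit in only one direction and therefore reduces to independent one-dimensional tridiagonal solves along grid lines. This is a hedged sketch of the splitting idea only (heat equation and zero Dirichlet boundaries are assumptions), not the authors' Stokes velocity/pressure scheme.

      import numpy as np
      from scipy.linalg import solve_banded

      def _slices(ndim, axis, sl):
          return tuple(sl if a == axis else slice(None) for a in range(ndim))

      def lap1d(u, axis):
          """Second difference along one axis with zero Dirichlet boundary values."""
          pad = [(1, 1) if a == axis else (0, 0) for a in range(u.ndim)]
          p = np.pad(u, pad)
          return (p[_slices(u.ndim, axis, slice(0, -2))] - 2 * u
                  + p[_slices(u.ndim, axis, slice(2, None))])

      def tridiag(n, r):
          """Banded form of (I - r * D2) for solve_banded (D2 = 1D Laplacian stencil)."""
          ab = np.zeros((3, n))
          ab[0, 1:] = -r        # superdiagonal
          ab[1, :] = 1 + 2 * r  # main diagonal
          ab[2, :-1] = -r       # subdiagonal
          return ab

      def adi_step(u, dt, dx, dy, alpha=1.0):
          """One Peaceman-Rachford ADI step for u_t = alpha * (u_xx + u_yy):
          each half step is implicit in one direction only, so only independent
          1D tridiagonal systems are solved along grid lines."""
          rx = alpha * dt / (2 * dx**2)
          ry = alpha * dt / (2 * dy**2)
          # Half step 1: implicit in x (axis 0), explicit in y (axis 1).
          rhs = u + ry * lap1d(u, axis=1)
          u_half = np.empty_like(u)
          abx = tridiag(u.shape[0], rx)
          for j in range(u.shape[1]):
              u_half[:, j] = solve_banded((1, 1), abx, rhs[:, j])
          # Half step 2: implicit in y (axis 1), explicit in x (axis 0).
          rhs = u_half + rx * lap1d(u_half, axis=0)
          u_new = np.empty_like(u)
          aby = tridiag(u.shape[1], ry)
          for i in range(u.shape[0]):
              u_new[i, :] = solve_banded((1, 1), aby, rhs[i, :])
          return u_new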

  3. Questionnaire Construction Manual

    DTIC Science & Technology

    1976-07-01

    (2) All questionnaire items should be grammatically correct. (3) All...kept in mind: a. All response alternatives should follow the stem both grammatically and logically, and if possible, be parallel in structure. b

  4. Broadband hybrid electromagnetic and piezoelectric energy harvesting from ambient vibrations and pneumatic vortices induced by running subway trains.

    DOT National Transportation Integrated Search

    2017-05-01

    The airfoil-based electromagnetic energy harvester containing parallel array motion between a moving coil and trajectory-matching multi-pole magnets was investigated. The magnets were aligned in an alternately magnetized formation of 6 magnets to...

  5. Sewage Reflects the Distribution of Human Faecal Lachnospiraceae

    EPA Science Inventory

    Faecal pollution contains a rich and diverse community of bacteria derived from animals and humans, many of which might serve as alternatives to the traditional enterococci and Escherichia coli faecal indicators. We used massively parallel sequencing (MPS) of the 16S rRNA gene to ...

  6. Flow cytometry for enrichment and titration in massively parallel DNA sequencing

    PubMed Central

    Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim

    2009-01-01

    Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for the costly pilot sequencing of samples during titration, is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748

  7. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: It performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to low rank factors W and H within the alternating iterations.
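
    A serial, single-machine sketch of the alternating non-negative least squares iteration the abstract refers to; the paper's contribution is the distributed-memory MPI version, so this NumPy/SciPy form only illustrates the alternating structure.

      import numpy as np
      from scipy.optimize import nnls

      def anls_nmf(A, rank, iters=50, seed=0):
          """Factor a nonnegative A (m x n) as W (m x rank) times H (rank x n) by
          alternating exact NNLS solves for the columns of H and the rows of W."""
          rng = np.random.default_rng(seed)
          m, n = A.shape
          W = rng.random((m, rank))
          H = rng.random((rank, n))
          for _ in range(iters):
              # Fix W, solve an NNLS problem for each column of H.
              for j in range(n):
                  H[:, j], _ = nnls(W, A[:, j])
              # Fix H, solve an NNLS problem for each row of W.
              for i in range(m):
                  W[i, :], _ = nnls(H.T, A[i, :])
          return W, H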

  8. SEPARATION OF GASES BY DIFFUSION

    DOEpatents

    Peierls, R.E.; Simon, F.E.; Arms, H.S.

    1960-12-13

    A method and apparatus are given for the separation of mixtures of gaseous or vaporous media by diffusion through a permeable membrane. The apparatus consists principally of a housing member having an elongated internal chamber dissected longitudinally by a permeable membrane. Means are provided for producing a pressure difference between opposite sides of the membrane to cause a flow of the media in the chamber therethrough. This pressure difference is alternated between opposite sides of the membrane to produce an oscillating flow through the membrane. Additional means is provided for producing flow parallel to the membrane in opposite directions on the two sides thereof and of the same frequency and in phase with the alternating pressure difference. The lighter molecules diffuse through the membrane more readily than the heavier molecules and the parallel flow effects a net transport of the lighter molecules in one direction and the heavier molecules in the opposite direction within the chamber. By these means a concentration gradient along the chamber is established.

  9. Separation of gases by diffusion

    DOEpatents

    Peierls, R. E.; Simon, F. E.; Arms, H. S.

    1960-12-13

    An apparatus is described for the separation of mixtures of gaseous or vaporous media by diffusion through a permeable membrane. The apparatus consists principally of a housing member having an elongated internal chamber dissected longitudinally by a permeable membrane. Means are provided for producing a pressure difference between opposite sides of the membrane to cause a flow of the media in the chamber therethrough. This pressure difference is alternated between opposite sides of the membrane to produce an oscillating flow through the membrane. Additional means is provided for producing flow parallel to the membrane in opposite directions on the two sides thereof and of the same frequency and in phase with the alternating pressure difference. The lighter molecules diffuse through the membrane more readily than the heavier molecules and the parallel flow effects a net transport of the lighter molecules in one direction and the heavier molecules in the opposite direction within the chamber. By these means a concentration gradient along the chamber is established. (auth)

  10. Solar array construction

    NASA Technical Reports Server (NTRS)

    Crouthamel, Marvin S. (Inventor); Coyle, Peter J. (Inventor)

    1982-01-01

    An interconnect tab on each cell of a first set of circular solar cells connects that cell in series with an adjacent cell in the set. This set of cells is arranged in alternate columns and rows of an array and a second set of similar cells is arranged in the remaining alternate columns and rows of the array. Three interconnect tabs on each solar cell of the said second set are employed to connect the cells of the second set to one another, in series and to connect the cells of the second set to those of the first set in parallel. Some tabs (making parallel connections) connect the same surface regions of adjacent cells to one another and others (making series connections) connect a surface region of one cell to the opposite surface region of an adjacent cell; however, the tabs are so positioned that the array may be easily assembled by depositing the cells in a certain sequence and in proper orientation.

  11. Challenging the epidemiologic evidence on passive smoking: tactics of tobacco industry expert witnesses.

    PubMed

    Francis, John A; Shea, Amy K; Samet, Jonathan M

    2006-12-01

    To analyse the statements given by tobacco industry defence witnesses during trial testimonies and depositions in second-hand smoke cases and in parallel, to review criticisms of epidemiology in industry-funded publications in order to identify strategies for discrediting epidemiologic evidence on passive smoking health effects. A collection of depositions and trial testimony transcripts from tobacco industry-related lawsuits filed in the United States during the 1990s, was compiled and indexed by the Tobacco Deposition and Trial Testimony Archive (DATTA). Statements in DATTA made by expert witnesses representing the tobacco industry relating to the health effects of passive smoking were identified and reviewed. Industry-supported publications within the peer-reviewed literature were also examined for statements on exposure misclassification, meta-analysis, and confounding. The witnesses challenged causation of adverse health effects of passive smoking by citing limitations of epidemiologic research, raising methodological and statistical issues, and disputing biological plausibility. Though not often cited directly by the witnesses, the defence tactics mirrored the strategies used in industry-funded reports in the peer-reviewed literature. The tobacco industry attempted to redirect the focus and dialogue related to the epidemiologic evidence on passive smoking. This approach, used by industry experts in trial testimony and depositions, placed bias as a certain alternative to causation of diseases related to passive smoking and proposed an unachievable standard for establishing the mechanism of disease.

  12. Strategies for Rapid in vivo 1H and hyperpolarized 13C MR Spectroscopic Imaging

    PubMed Central

    Nelson, Sarah J.; Ozhinsky, Eugene; Li, Yan; Park, Il woo; Crane, Jason

    2013-01-01

    In vivo MRSI is an important imaging modality that has been shown in numerous research studies to give biologically relevant information for assessing the underlying mechanisms of disease and for monitoring response to therapy. The increasing availability of high field scanners and multichannel radiofrequency coils has provided the opportunity to acquire in vivo data with significant improvements in sensitivity and signal to noise ratio. These capabilities may be used to shorten acquisition time and provide increased coverage. The ability to acquire rapid, volumetric MRSI data is critical for examining heterogeneity in metabolic profiles and for relating serial changes in metabolism within the same individual during the course of the disease. In this review we discuss the implementation of strategies that use alternative k-space sampling trajectories and parallel imaging methods in order to speed up data acquisition. The impact of such methods is demonstrated using three recent examples of how these methods have been applied. These are the acquisition of robust 3D 1H MRSI data within 5-10 minutes at a field strength of 3T, obtaining higher sensitivity for 1H MRSI at 7T, and using ultrafast volumetric and dynamic 13C MRSI for monitoring the changes in signals that occur following the injection of hyperpolarized 13C agents. PMID:23453759

  13. Targeted drug delivery for cancer therapy: the other side of antibodies

    PubMed Central

    2012-01-01

    Therapeutic monoclonal antibody (TMA) based therapies for cancer have advanced significantly over the past two decades both in their molecular sophistication and clinical efficacy. Initial development efforts focused mainly on humanizing the antibody protein to overcome problems of immunogenicity and on expanding of the target antigen repertoire. In parallel to naked TMAs, antibody-drug conjugates (ADCs) have been developed for targeted delivery of potent anti-cancer drugs with the aim of bypassing the morbidity common to conventional chemotherapy. This paper first presents a review of TMAs and ADCs approved for clinical use by the FDA and those in development, focusing on hematological malignancies. Despite advances in these areas, both TMAs and ADCs still carry limitations and we highlight the more important ones including cancer cell specificity, conjugation chemistry, tumor penetration, product heterogeneity and manufacturing issues. In view of the recognized importance of targeted drug delivery strategies for cancer therapy, we discuss the advantages of alternative drug carriers and where these should be applied, focusing on peptide-drug conjugates (PDCs), particularly those discovered through combinatorial peptide libraries. By defining the advantages and disadvantages of naked TMAs, ADCs and PDCs it should be possible to develop a more rational approach to the application of targeted drug delivery strategies in different situations and ultimately, to a broader basket of more effective therapies for cancer patients. PMID:23140144

  14. Advanced propulsion system concept for hybrid vehicles

    NASA Technical Reports Server (NTRS)

    Bhate, S.; Chen, H.; Dochat, G.

    1980-01-01

    A series hybrid system, utilizing a free piston Stirling engine with a linear alternator, and a parallel hybrid system, incorporating a kinematic Stirling engine, are analyzed for various specified reference missions/vehicles ranging from a small two passenger commuter vehicle to a van. Parametric studies for each configuration, detailed tradeoff studies to determine engine, battery and system definition, short term energy storage evaluation, and detailed life cycle cost studies were performed. Results indicate that the selection of a parallel Stirling engine/electric hybrid propulsion system can reduce petroleum consumption by 70 percent over present conventional vehicles.

  15. The Future Combat System: Minimizing Risk While Maximizing Capability

    DTIC Science & Technology

    2000-05-01

    The paper also examines the wheeled versus tracked debate. The paper concludes by recommending some of the technologies for further development under a parallel acquisition strategy

  16. A Parallel Workload Model and its Implications for Processor Allocation

    DTIC Science & Technology

    1996-11-01

    with SEV or AVG, both of which can tolerate c = 0.4-0.6 before their performance deteriorates significantly. On the other hand, Setia [10] has...Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job...Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [11] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor

  17. Implementation of a Fully-Balanced Periodic Tridiagonal Solver on a Parallel Distributed Memory Architecture

    DTIC Science & Technology

    1994-05-01

    PARALLEL DISTRIBUTED MEMORY ARCHITECTURE. T. M. Eidson, G. Erlebacher. Contract NAS1-19480, May 1994...DISTRIBUTED MEMORY ARCHITECTURE. T. M. Eidson, High Technology Corporation, Hampton, VA 23665; G. Erlebacher, Institute for Computer Applications in Science and...developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular

  18. The ecology of population dispersal: Modeling alternative basin-plateau foraging strategies to explain the Numic expansion.

    PubMed

    Magargal, Kate E; Parker, Ashley K; Vernon, Kenneth Blake; Rath, Will; Codding, Brian F

    2017-07-08

    The expansion of Numic speaking populations into the Great Basin required individuals to adapt to a relatively unproductive landscape. Researchers have proposed numerous social and subsistence strategies to explain how and why these settlers were able to replace any established populations, including private property and intensive plant processing. Here we evaluate these hypotheses and propose a new strategy involving the use of landscape fire to increase resource encounter rates. Implementing a novel, spatially explicit, multi-scalar prey choice model, we examine how individual decisions approximating each alternative strategy (private property, anthropogenic fire, and intensive plant processing) would aggregate at the patch and band level to confer an overall benefit to this colonizing population. Analysis relies on experimental data reporting resource profitability and abundance, ecological data on the historic distribution of vegetation patches, and ethnohistoric data on the distribution of Numic bands. Model results show that while resource privatization and landscape fires produce a substantial advantage, intensified plant processing garners the greatest benefit. The relative benefits of alternative strategies vary significantly across ecological patches resulting in variation across ethnographic band ranges. Combined, a Numic strategy including all three alternatives would substantially increase subsistence yields. The application of a strategy set that includes landscape fire, privatization and intensified processing of seeds and nuts, explains why the Numa were able to outcompete local populations. This approach provides a framework to help explain how individual decisions can result in such population replacement events throughout human history. © 2017 Wiley Periodicals, Inc.

  19. Business model for sensor-based fall recognition systems.

    PubMed

    Fachinger, Uwe; Schöpke, Birte

    2014-01-01

    AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed in parallel with the R&D process, with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given and the mutually exclusive alternatives for each partial model are presented. The partial models are interconnected and the combinations of compatible alternatives lead to consistent alternative business models. However, in the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.

  20. Sampling Designs in Qualitative Research: Making the Sampling Process More Public

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Leech, Nancy L.

    2007-01-01

    The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…

  1. Scientific Writing: Strategies and Tools for Students and Advisors

    ERIC Educational Resources Information Center

    Singh, Vikash; Mayer, Philipp

    2014-01-01

    Scientific writing is a demanding task and many students need more time than expected to finish their research articles. To speed up the process, we highlight some tools, strategies as well as writing guides. We recommend starting early in the research process with writing and to prepare research articles, not after but in parallel to the lab or…

  2. Teachers' Report of Strategies Used to Facilitate Language Development in Students with Hearing Loss

    ERIC Educational Resources Information Center

    Handley, Candace Michele

    2013-01-01

    The purpose of this study was to identify the extent to which teachers of the deaf report using four identified language facilitation strategies: recasting, extension, responsivity, and self-talk/parallel talk. Participants self-selected in response to an advertisement on a state-wide listserv and to the state's residential school internal news.…

  3. Nine Days to Oder: An Alternate NATO Strategy for Central Region, Europe.

    DTIC Science & Technology

    1980-06-01

    in the formulation of the air campaign plans. To all these colleagues in the profession of arms, the authors offer their thanks. TABLE OF CONTENTS...Risk Assessment...XI. THE COST OF THE ALTERNATE STRATEGY...XII. PACT...1. Encyclopedia of Military History, p. 261. 2. Basil Liddell Hart, The Sword and the Pen, p. 319. 3. Ibid., p. 318. CHAPTER II: SOVIET STRATEGY AND PACT

  4. Constructing the effect of alternative intervention strategies on historic epidemics.

    PubMed

    Cook, A R; Gibson, G J; Gottwald, T R; Gilligan, C A

    2008-10-06

    Data from historical epidemics provide a vital and sometimes under-used resource from which to devise strategies for future control of disease. Previous methods for retrospective analysis of epidemics, in which alternative interventions are compared, do not make full use of the information; by using only partial information on the historical trajectory, augmentation of control may lead to predictions of a paradoxical increase in disease. Here we introduce a novel statistical approach that takes full account of the available information in constructing the effect of alternative intervention strategies in historic epidemics. The key to the method lies in identifying a suitable mapping between the historic and notional outbreaks, under alternative control strategies. We do this by using the Sellke construction as a latent process linking epidemics. We illustrate the application of the method with two examples. First, using temporal data for the common human cold, we show the improvement under the new method in the precision of predictions for different control strategies. Second, we show the generality of the method for retrospective analysis of epidemics by applying it to a spatially extended arboreal epidemic in which we demonstrate the relative effectiveness of host culling strategies that differ in frequency and spatial extent. Some of the inferential and philosophical issues that arise are discussed along with the scope of potential application of the new method.
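
    A minimal sketch of the Sellke construction referred to above (a generic discrete-time SIR toy with made-up parameters, not the authors' model): each susceptible draws an Exp(1) exposure threshold and becomes infected once its accumulated infection pressure exceeds that threshold; reusing the same thresholds under a different control scenario provides the latent-process link between the historic and notional epidemics.

```python
import random

def sellke_epidemic(n, beta, recovery_time, thresholds, dt=0.01, t_max=200.0):
    """Discrete-time sketch of the Sellke construction for a toy SIR epidemic.
    Susceptible i becomes infected once its accumulated infection pressure
    exceeds thresholds[i]; reusing the same thresholds couples epidemics
    simulated under alternative control strategies."""
    pressure = [0.0] * n
    status = ["S"] * n
    status[0] = "I"                      # seed infective
    recover_at = {0: recovery_time}
    t, infections = 0.0, 1
    while t < t_max and any(s == "I" for s in status):
        n_inf = sum(1 for s in status if s == "I")
        for i in range(n):
            if status[i] == "S":
                pressure[i] += beta * n_inf * dt / n
                if pressure[i] > thresholds[i]:
                    status[i] = "I"
                    recover_at[i] = t + recovery_time
                    infections += 1
        for i, t_rec in recover_at.items():
            if status[i] == "I" and t >= t_rec:
                status[i] = "R"
        t += dt
    return infections

random.seed(1)
n = 200
thresholds = [random.expovariate(1.0) for _ in range(n)]   # shared latent process
historic = sellke_epidemic(n, beta=2.0, recovery_time=3.0, thresholds=thresholds)
notional = sellke_epidemic(n, beta=0.3, recovery_time=3.0, thresholds=thresholds)
print("historic outbreak size:", historic, "| under notional control:", notional)
```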

  5. Telemetry with an Optical Fiber Revisited: An Alternative Strategy

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2014-01-01

    With a new data-acquisition system developed by PASCO scientific, an experiment on telemetry with an optical fiber can be made easier and more accurate. For this aim, an alternative strategy for remote temperature measurement is proposed: the frequency of light pulses transmitted via the light guide numerically equals the temperature using…

  6. 76 FR 10530 - Supplemental Proposed Rule of Source Specific Federal Implementation Plan for Implementing Best...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-25

    ... proposed post-control BART limit of 0.012 lb/MMBtu on Units 1-3. C. Modeling and Demonstrating Reasonable... a different alternative emissions control strategy would achieve more progress than EPA's BART... Background for Proposing To Approve an Alternative Emissions Control Strategy as Achieving Better Progress...

  7. 77 FR 47361 - Proposed Information Collection; Comment Request; 2013 Alternative Contact Strategy Test

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-08

    ... research will be conducted through a series of projects and tests throughout the decade. Contact involving... 2020 Research and Testing Project tests and design options for the 2020 Census. II. Method of... Alternative Contact Strategy Test is the first test to support this research. The Census Bureau will test...

  8. Supporting Alternative Strategies for Learning Chemical Applications of Group Theory

    ERIC Educational Resources Information Center

    Southam, Daniel C.; Lewis, Jennifer E.

    2013-01-01

    A group theory course for chemists was taught entirely with process oriented guided inquiry learning (POGIL) to facilitate alternative strategies for learning. Students completed a test of one aspect of visuospatial aptitude to determine their individual approaches to solving spatial tasks, and were sorted into groups for analysis on the basis of…

  9. Alternative Teaching Strategies; Helping Behaviorally Troubled Children Achieve. A Guide for Teachers and Psychologists.

    ERIC Educational Resources Information Center

    Swift, Marshall S.; Spivack, George

    This book provides (1) specific information about overt classroom behaviors that affect or reflect academic success or failure, and (2) information and suggestions about alternative teaching strategies that may be used to increase behavioral effectiveness and subsequent academic achievement. The focus of the book is on specific behaviors, behavior…

  10. Social Competence and Promoting Alternative Thinking Strategies--PATHS Preschool Curriculum

    ERIC Educational Resources Information Center

    Arda, Tugce Burcu; Ocak, Sakire

    2012-01-01

    This study aimed to evaluate the effects of the Promoting Alternative Thinking Strategies (PATHS)--Preschool Curriculum on preschool children's social skills. Six-year-old children (N = 95) and their teachers (N = 7) in Izmir made up the participant group. With a pretest-intervention-posttest design, data were collected through…

  11. Socioeconomic evaluation of broad-scale land management strategies.

    Treesearch

    Lisa K. Crone; Richard W. Haynes

    2001-01-01

    This paper examines the socioeconomic effects of alternative management strategies for Forest Service and Bureau of Land Management lands in the interior Columbia basin. From a broad-scale perspective, there is little impact or variation between alternatives in terms of changes in total economic activity or social conditions in the region. However, adopting a finer...

  12. Promoting Alternative Thinking Strategies (PATHS): Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Humphrey, Neil; Barlow, Alexandra; Wigelsworth, Michael; Lendrum, Ann; Pert, Kirsty; Joyce, Craig; Stephens, Emma; Wo, Lawrence; Squires, Garry; Woods, Kevin; Calam, Rachel; Harrison, Mark; Turner, Alex; Humphrey, Neil

    2015-01-01

    Promoting Alternative Thinking Strategies (PATHS) is a school-based social and emotional learning (SEL) curriculum that aims to help children in primary school manage their behaviour, understand their emotions, and work well with others. PATHS consists of a series of lessons that cover topics such as identifying and labelling feelings, controlling…

  13. Read Across Approaches: Chemical Structure and Bioactivity ...

    EPA Pesticide Factsheets

    Presentation for the FDA-CFSAN and ILSI workshop "State of the Science on Alternatives to Animal Testing and Integration of Testing Strategies for Food Safety Assessments."

  14. Alternative IT Sourcing Strategies: Six Views

    ERIC Educational Resources Information Center

    Mahon, Ed; McPherson, Michael R.; Vaughan, Joseph; Rowe, Theresa; Pickett, Michael P.; Bielec, John A.

    2011-01-01

    IT leaders today must not only provide but also decide: which tools and services should they continue to supply, which are better delivered by others, and perhaps most critically, which methods from among the bewildering array of alternative sourcing strategies will best serve their faculty, staff, and students. In 2009, the EDUCAUSE Center for…

  15. The impact of two multiple-choice question formats on the problem-solving strategies used by novices and experts.

    PubMed

    Coderre, Sylvain P; Harasym, Peter; Mandin, Henry; Fick, Gordon

    2004-11-05

    Pencil-and-paper examination formats, and specifically the standard, five-option multiple-choice question, have often been questioned as a means for assessing higher-order clinical reasoning or problem solving. This study first investigated whether two paper formats with differing numbers of alternatives (standard five-option and extended-matching questions) can test problem-solving abilities. Second, the impact of the number of alternatives on psychometrics and problem-solving strategies was examined. Think-aloud protocols were collected to determine the problem-solving strategy used by experts and non-experts in answering Gastroenterology questions, across the two pencil-and-paper formats. The two formats demonstrated equal ability in testing problem-solving abilities, while the number of alternatives did not significantly impact psychometrics or problem-solving strategies utilized. These results support the notion that well-constructed multiple-choice questions can in fact test higher-order clinical reasoning. Furthermore, it can be concluded that in testing clinical reasoning, the question stem, or content, remains more important than the number of alternatives.

  16. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  17. [Optimizing carbon/energy metabolism to enhance monellin production by Pichia pastoris].

    PubMed

    Huai, Qiangqiang; Jia, Luqiang; Ding, Jian; Chen, Shanshan; Sun, Jiaowen; Shi, Zhongping

    2018-02-25

    In heterologous protein production by Pichia pastoris, methanol induction is generally initiated when cell density reaches a very high level. However, this traditional strategy suffers from difficulty in DO control, accumulation of toxic by-metabolites, and low target protein titer. Therefore, initiating methanol induction at a lower cell concentration is considered an alternative strategy to overcome those problems. However, the methanol/energy regulation mechanisms involved in initiating induction at lower concentration are unclear and seldom reported. In this article, with monellin production as a prototype, we analyzed methanol/energy metabolism in the protein expression process using strategies of initiating induction at both higher and lower cell concentrations. We attempted to interpret the advantages of the "alternative" strategy via online measurements of methanol consumption, CO₂ production, and O₂ uptake rates. When adopting this "alternative" strategy and maintaining the temperature at 30 °C, the carbon flux ratio directed into monellin precursor synthesis reached the highest level of 65%. In addition, monellin synthesis was completely associated with cell growth.

  18. Wichita Mountains Wildlife Refuge - Comprehensive Alternative Transportation Plan

    DOT National Transportation Integrated Search

    2014-05-01

    The Comprehensive Alternative Transportation Plan for Wichita Mountains Wildlife Refuge in southwestern Oklahoma analyzes a range of transportation and resource management challenges and documents a holistic set of alternative transportation strategi...

  19. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    By reorganizing the execution order and optimizing the data structures, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework in CUDA on NVIDIA's GPU. Not only the compute-intensive components of the H.264 encoder are parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup of 20 times and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) depends heavily on the memory bandwidth, which gives an insight for new architecture design. PMID:24757432

  20. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  1. State-of-the-art robotic devices for ankle rehabilitation: Mechanism and control review.

    PubMed

    Hussain, Shahid; Jamwal, Prashant K; Ghayesh, Mergen H

    2017-12-01

    There is an increasing research interest in exploring use of robotic devices for the physical therapy of patients suffering from stroke and spinal cord injuries. Rehabilitation of patients suffering from ankle joint dysfunctions such as drop foot is vital and therefore has called for the development of newer robotic devices. Several robotic orthoses and parallel ankle robots have been developed during the last two decades to augment the conventional ankle physical therapy of patients. A comprehensive review of these robotic ankle rehabilitation devices is presented in this article. Recent developments in the mechanism design, actuation and control are discussed. The study encompasses robotic devices for treadmill and over-ground training as well as platform-based parallel ankle robots. Control strategies for these robotic devices are deliberated in detail with an emphasis on the assist-as-needed training strategies. Experimental evaluations of the mechanism designs and various control strategies of these robotic ankle rehabilitation devices are also presented.

  2. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
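
    A hedged sketch of the underlying idea, using Python's standard concurrent.futures in place of the network of smaller computers described in the record (the analysis function and design points are placeholders): independent analysis runs within one design cycle are dispatched in parallel and the results gathered before the next design update.

```python
from concurrent.futures import ProcessPoolExecutor

def run_analysis(design):
    """Placeholder for a validated analysis code evaluated at one design point."""
    x, y = design
    return (x - 3.0) ** 2 + (y + 1.0) ** 2   # stand-in objective value

def evaluate_in_parallel(designs, max_workers=4):
    # Each analysis case is independent, so the cases can run on separate
    # processes here, or on separate machines in a distributed setting.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_analysis, designs))

if __name__ == "__main__":
    candidate_designs = [(1.0, 0.0), (2.5, -0.5), (3.0, -1.0), (4.0, 1.0)]
    print(evaluate_in_parallel(candidate_designs))
```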

  3. The fast and the slow of skilled bimanual rhythm production: parallel versus integrated timing.

    PubMed

    Krampe, R T; Kliegl, R; Mayr, U; Engbert, R; Vorberg, D

    2000-02-01

    Professional pianists performed 2 bimanual rhythms at a wide range of different tempos. The polyrhythmic task required the combination of 2 isochronous sequences (3 against 4) between the hands; in the syncopated rhythm task successive keystrokes formed intervals of identical (isochronous) durations. At slower tempos, pianists relied on integrated timing control merging successive intervals between the hands into a common reference frame. A timer-motor model is proposed based on the concepts of rate fluctuation and the distinction between target specification and timekeeper execution processes as a quantitative account of performance at slow tempos. At rapid rates, expert pianists used hand-independent, parallel timing control. As an alternative to a model based on a single central clock, the findings support a model of flexible control structures with multiple timekeepers that can work in parallel to accommodate specific task constraints.

  4. Sensor-scheduling simulation of disparate sensors for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Hobson, T.; Clarkson, I.

    2011-09-01

    The art and science of space situational awareness (SSA) has been practised and developed from the time of Sputnik. However, recent developments, such as the accelerating pace of satellite launch, the proliferation of launch capable agencies, both commercial and sovereign, and recent well-publicised collisions involving man-made space objects, have further magnified the importance of timely and accurate SSA. The United States Strategic Command (USSTRATCOM) operates the Space Surveillance Network (SSN), a global network of sensors tasked with maintaining SSA. The rapidly increasing number of resident space objects will require commensurate improvements in the SSN. Sensors are scarce resources that must be scheduled judiciously to obtain measurements of maximum utility. Improvements in sensor scheduling and fusion can serve to reduce the number of additional sensors that may be required. Recently, Hill et al. [1] have proposed and developed a simulation environment named TASMAN (Tasking Autonomous Sensors in a Multiple Application Network) to enable testing of alternative scheduling strategies within a simulated multi-sensor, multi-target environment. TASMAN simulates a high-fidelity, hardware-in-the-loop system by running multiple machines with different roles in parallel. At present, TASMAN is limited to simulations involving electro-optic sensors. Its high fidelity is at once a feature and a limitation, since supercomputing is required to run simulations of appreciable scale. In this paper, we describe an alternative, modular and scalable SSA simulation system that can extend the work of Hill et al. with reduced complexity, albeit also with reduced fidelity. The tool has been developed in MATLAB and therefore can be run on a very wide range of computing platforms. It can also make use of MATLAB’s parallel processing capabilities to obtain considerable speed-up. The speed and flexibility so obtained can be used to quickly test scheduling algorithms even with a relatively large number of space objects. We further describe an application of the tool by exploring how the relative mixture of electro-optical and radar sensors can impact the scheduling, fusion and achievable accuracy of an SSA system. By varying the mixture of sensor types, we are able to characterise the main advantages and disadvantages of each configuration.

  5. Video prompting versus other instruction strategies for persons with Alzheimer's disease.

    PubMed

    Perilli, Viviana; Lancioni, Giulio E; Hoogeveen, Frans; Caffó, Alessandro; Singh, Nirbhay; O'Reilly, Mark; Sigafoos, Jeff; Cassano, Germana; Oliva, Doretta

    2013-06-01

    Two studies assessed the effectiveness of video prompting as a strategy to support persons with mild and moderate Alzheimer's disease in performing daily activities. In study I, video prompting was compared to an existing strategy relying on verbal instructions. In study II, video prompting was compared to another existing strategy relying on static pictorial cues. Video prompting and the other strategies were counterbalanced across tasks and participants and compared within alternating treatments designs. Video prompting was effective in all participants. Similarly effective were the other 2 strategies, and only occasional differences between the strategies were reported. Two social validation assessments showed that university psychology students and graduates rated the patients' performance with video prompting more favorably than their performance with the other strategies. Video prompting may be considered a valuable alternative to the other strategies to support daily activities in persons with Alzheimer's disease.

  6. A Short-Circuit Method for Networks.

    ERIC Educational Resources Information Center

    Ong, P. P.

    1983-01-01

    Describes a method of network analysis that allows avoidance of Kirchhoff's laws (provided the network is symmetrical) by reduction to simple series/parallel resistances. The method can be extended to symmetrical alternating current, capacitance or inductance if corresponding theorems are used. A symmetric cubic network serves as an example. (JM)
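
    For the symmetric cubic network cited as the example, the reduction can be checked numerically (assuming twelve equal resistors on the edges of a cube, measured between opposite corners): by symmetry, the three nodes adjacent to each measurement corner sit at the same potential and may be short-circuited together, leaving only series/parallel groups.

```python
def parallel(*rs):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

def cube_body_diagonal(R=1.0):
    # Symmetry: the three nodes adjacent to the input corner share one
    # potential, as do the three nodes adjacent to the output corner, so each
    # group can be short-circuited. The cube then collapses to three series
    # stages of 3, 6, and 3 parallel edges.
    return parallel(R, R, R) + parallel(R, R, R, R, R, R) + parallel(R, R, R)

print(cube_body_diagonal(1.0))   # 5/6 of a unit resistor, i.e. about 0.833
```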

  7. Searching for an Axis-Parallel Shoreline

    NASA Astrophysics Data System (ADS)

    Langetepe, Elmar

    We are searching for an unknown horizontal or vertical line in the plane under the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies that result in two-sequence functionals. For optimizing such functionals, we apply a method that combines two main paradigms. The given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and can prove that this is the best strategy among all cyclic and monotone strategies, which is a main step toward a lower-bound construction.

  8. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  9. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
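
    The benchmark instance mentioned, the subset sum problem for {2, 5, 9}, is small enough to sketch directly (a conventional sequential enumeration standing in for the parallel exploration performed by the motor-propelled agents): each subset corresponds to one path through the network, and the reachable sums are exactly those realized by some subset.

```python
from itertools import combinations

def reachable_sums(values):
    """Enumerate every subset sum; in the nanofabricated device each subset
    corresponds to one agent's path through the network."""
    sums = {}
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            sums.setdefault(sum(subset), []).append(subset)
    return sums

instance = (2, 5, 9)
for total, subsets in sorted(reachable_sums(instance).items()):
    print(total, subsets)
# Reachable totals for {2, 5, 9}: 0, 2, 5, 7, 9, 11, 14, 16.
```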

  10. Toxicity Testing in the 21st Century: Defining New Risk Assessment Approaches Based on Perturbation of Intracellular Toxicity Pathways

    PubMed Central

    Bhattacharya, Sudin; Zhang, Qiang; Carmichael, Paul L.; Boekelheide, Kim; Andersen, Melvin E.

    2011-01-01

    The approaches to quantitatively assessing the health risks of chemical exposure have not changed appreciably in the past 50 to 80 years, the focus remaining on high-dose studies that measure adverse outcomes in homogeneous animal populations. This expensive, low-throughput approach relies on conservative extrapolations to relate animal studies to much lower-dose human exposures and is of questionable relevance to predicting risks to humans at their typical low exposures. It makes little use of a mechanistic understanding of the mode of action by which chemicals perturb biological processes in human cells and tissues. An alternative vision, proposed by the U.S. National Research Council (NRC) report Toxicity Testing in the 21st Century: A Vision and a Strategy, called for moving away from traditional high-dose animal studies to an approach based on perturbation of cellular responses using well-designed in vitro assays. Central to this vision are (a) “toxicity pathways” (the innate cellular pathways that may be perturbed by chemicals) and (b) the determination of chemical concentration ranges where those perturbations are likely to be excessive, thereby leading to adverse health effects if present for a prolonged duration in an intact organism. In this paper we briefly review the original NRC report and responses to that report over the past 3 years, and discuss how the change in testing might be achieved in the U.S. and in the European Union (EU). EU initiatives in developing alternatives to animal testing of cosmetic ingredients have run very much in parallel with the NRC report. Moving from current practice to the NRC vision would require using prototype toxicity pathways to develop case studies showing the new vision in action. In this vein, we also discuss how the proposed strategy for toxicity testing might be applied to the toxicity pathways associated with DNA damage and repair. PMID:21701582

  11. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
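
    A minimal sketch of the Newton-Krylov portion of such an algorithm (a small toy nonlinear system with SciPy's GMRES and a finite-difference Jacobian-vector product; the two-level Schwarz preconditioner, density upwinding, and the full potential discretization are omitted, so this is an illustration rather than the authors' solver):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual standing in for the discretized PDE."""
    n = u.size
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A @ u + u**3 - 1.0

def newton_krylov(F, u0, tol=1e-10, max_newton=20):
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7
        # Inexact Newton: Jacobian-vector products by finite differences.
        J = LinearOperator((u.size, u.size), dtype=float,
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r)      # Krylov (GMRES) solve for the Newton step
        u = u + du
    return u

print(newton_krylov(residual, np.zeros(8)))
```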

  12. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many documents in the cloud storage process are packaged only after all packets have been received. Transferring this stored procedure from the local transmitter to the server, packing and unpacking consume a great deal of time, and transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on a Hadoop cloud computing platform, this algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through traditional sequential transmission constraints and reduces storage coupling to improve transmission efficiency.
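
    A hedged sketch of the map/reduce split described above, using Python's multiprocessing as a stand-in for the MPI-based Mapper/Reducer execution in the record (the chunks and the word-count job are illustrative only):

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def mapper(chunk):
    """Map phase: process one data chunk independently (runs in parallel)."""
    return Counter(chunk.split())

def reducer(acc, partial):
    """Reduce phase: merge a partial result as soon as it arrives."""
    acc.update(partial)
    return acc

if __name__ == "__main__":
    chunks = ["a b a", "b c", "a c c"]          # stand-ins for file packets/chunks
    with Pool(processes=3) as pool:
        # imap_unordered lets the reducer start merging without waiting for
        # every mapper to finish, shortening the reducer's waiting time.
        totals = reduce(reducer, pool.imap_unordered(mapper, chunks), Counter())
    print(totals)
```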

  13. IARC classes 1 and 2 carcinogens are successfully identified by an alternative strategy that detects DNA-reactivity and cell transformation ability of chemicals.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Battistelli, Chiara Laura; Tcheremenskaia, Olga

    2013-12-12

    For decades, traditional toxicology has been the ultimate source of information on the carcinogenic potential of chemicals; however with increasing demand on regulation of chemicals and decreasing resources for testing, opportunities to accept "alternative" approaches have dramatically expanded. The need for tools able to identify carcinogens in shorter times and at a lower cost in terms of animal lives and money is still an open issue, and the present strategies and regulations for carcinogenicity pre-screening do not adequately protect human health. In previous papers, we have proposed an integrated in vitro/in silico strategy that detects DNA-reactivity and tissue disorganization/disruption by chemicals, and we have shown that the combination of Salmonella and Structural Alerts for the DNA-reactive carcinogens, and in vitro cell transformation assays for nongenotoxic carcinogens permits the identification of a very large proportion (up to 95%) of rodent carcinogens, while having a considerable specificity with the rodent noncarcinogens. In the present paper we expand the previous investigation and show that this alternative strategy identifies correctly IARC Classes 1 and 2 carcinogens. If implemented, this alternative strategy can contribute to improve the protection of human health while decreasing the use of animals. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. The pattern of parallel edge plasma flows due to pressure gradients, recycling, and resonant magnetic perturbations in DIII-D

    DOE PAGES

    Frerichs, H.; Schmitz, Oliver; Evans, Todd; ...

    2015-07-13

    High resolution plasma transport simulations with the EMC3-EIRENE code have been performed to address the parallel plasma flow structure in the boundary of a poloidal divertor configuration with non-axisymmetric perturbations at DIII-D. Simulation results show that a checkerboard pattern of flows with alternating direction is generated inside the separatrix. This pattern is aligned with the position of the main resonances (i.e. where the safety factor is equal to rational values q = m/n for a perturbation field with base mode number n): m pairs of alternating forward and backward flow channels exist for each resonance. The poloidal oscillations are aligned with the subharmonic Melnikov function, which indicates that the plasma flow is generated by parallel pressure gradients along perturbed field lines. Lastly, an additional scrape-off layer-like domain is introduced by the perturbed separatrix which guides field lines from the interior to the divertor targets, resulting in an enhanced outward flow that is consistent with the experimentally observed particle pump-out effect. However, while the lobe structure of the perturbed separatrix is very well reflected in the temperature profile, the same lobes can appear to be smaller in the flow profile due to a competition between high upstream pressure and downstream particle sources driving flows in opposite directions.

  15. Ecology of Fungus Gnats (Bradysia spp.) in Greenhouse Production Systems Associated with Disease-Interactions and Alternative Management Strategies.

    PubMed

    Cloyd, Raymond A

    2015-04-09

    Fungus gnats (Bradysia spp.) are major insect pests of greenhouse-grown horticultural crops mainly due to the direct feeding damage caused by the larvae, and the ability of larvae to transmit certain soil-borne plant pathogens. Currently, insecticides and biological control agents are being used successively to deal with fungus gnat populations in greenhouse production systems. However, these strategies may only be effective as long as greenhouse producers also implement alternative management strategies such as cultural, physical, and sanitation. This includes elimination of algae, and plant and growing medium debris; placing physical barriers onto the growing medium surface; and using materials that repel fungus gnat adults. This article describes the disease-interactions associated with fungus gnats and foliar and soil-borne diseases, and the alternative management strategies that should be considered by greenhouse producers in order to alleviate problems with fungus gnats in greenhouse production systems.

  16. Ecology of Fungus Gnats (Bradysia spp.) in Greenhouse Production Systems Associated with Disease-Interactions and Alternative Management Strategies

    PubMed Central

    Cloyd, Raymond A.

    2015-01-01

    Fungus gnats (Bradysia spp.) are major insect pests of greenhouse-grown horticultural crops mainly due to the direct feeding damage caused by the larvae, and the ability of larvae to transmit certain soil-borne plant pathogens. Currently, insecticides and biological control agents are being used successively to deal with fungus gnat populations in greenhouse production systems. However, these strategies may only be effective as long as greenhouse producers also implement alternative management strategies such as cultural, physical, and sanitation. This includes elimination of algae, and plant and growing medium debris; placing physical barriers onto the growing medium surface; and using materials that repel fungus gnat adults. This article describes the disease-interactions associated with fungus gnats and foliar and soil-borne diseases, and the alternative management strategies that should be considered by greenhouse producers in order to alleviate problems with fungus gnats in greenhouse production systems. PMID:26463188

  17. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
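
    A hedged sketch of the path-level idea (not the GSLIB SISIM/SGSIM code; the neighbourhood rule and the toy grid are placeholders): nodes on the simulation path are re-arranged into groups whose members do not condition on one another, so each group can be simulated concurrently while still reproducing the sequential realization.

```python
def group_non_conflicting(path, neighbours):
    """Greedily re-arrange a simulation path into levels of nodes that do not
    condition on one another (placeholder rule; the actual method uses the
    kriging search neighbourhood)."""
    levels, assigned = [], {}
    for node in path:
        # A node must be simulated after any neighbour that precedes it on the
        # original path, so its conditioning data match the sequential run.
        prior = [assigned[nb] for nb in neighbours.get(node, ()) if nb in assigned]
        level = max(prior, default=-1) + 1
        assigned[node] = level
        while len(levels) <= level:
            levels.append([])
        levels[level].append(node)
    return levels

# Toy 1-D grid: each node conditions on its immediate grid neighbours.
path = [3, 0, 4, 1, 2]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
for lvl, nodes in enumerate(group_non_conflicting(path, neighbours)):
    print("level", lvl, "-> simulate concurrently:", nodes)
```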

  18. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, ultra-high-acceleration parallel imaging can be performed with decent image quality. For example, we have done successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290

  19. Modulation of human dermal microvascular endothelial cell and human gingival fibroblast behavior by micropatterned silica coating surfaces for zirconia dental implant applications

    PubMed Central

    Laranjeira, Marta S; Carvalho, Ângela; Pelaez-Vargas, Alejandro; Hansford, Derek; Ferraz, Maria Pia; Coimbra, Susana; Costa, Elísio; Santos-Silva, Alice; Fernandes, Maria Helena; Monteiro, Fernando Jorge

    2014-01-01

    Dental ceramic implants have shown superior esthetic behavior and the absence of induced allergic disorders when compared to titanium implants. Zirconia may become a potential candidate to be used as an alternative to titanium dental implants if surface modifications are introduced. In this work, bioactive micropatterned silica coatings were produced on zirconia substrates, using a combined methodology of sol–gel processing and soft lithography. The aim of the work was to compare the in vitro behavior of human gingival fibroblasts (HGFs) and human dermal microvascular endothelial cells (HDMECs) on three types of silica-coated zirconia surfaces: flat and micropatterned (with pillars and with parallel grooves). Our results showed that cells had a higher metabolic activity (HGF, HDMEC) and increased gene expression levels of fibroblast-specific protein-1 (FSP-1) and collagen type I (COL I) on surfaces with pillars. Nevertheless, parallel grooved surfaces were able to guide cell growth. Even capillary tube-like networks of HDMEC were oriented according to the surface geometry. Zirconia and silica with different topographies have shown to be blood compatible and silica coating reduced bacteria adhesion. All together, the results indicated that microstructured bioactive coating seems to be an efficient strategy to improve soft tissue integration on zirconia implants, protecting implants from peri-implant inflammation and improving long-term implant stabilization. This new approach of micropatterned silica coating on zirconia substrates can generate promising novel dental implants, with surfaces that provide physical cues to guide cells and enhance their behavior. PMID:27877662

  20. Transcriptional analysis of the Arabidopsis ovule by massively parallel signature sequencing

    PubMed Central

    Sánchez-León, Nidia; Arteaga-Vázquez, Mario; Alvarez-Mejía, César; Mendiola-Soto, Javier; Durán-Figueroa, Noé; Rodríguez-Leal, Daniel; Rodríguez-Arévalo, Isaac; García-Campayo, Vicenta; García-Aguilar, Marcelina; Olmedo-Monfil, Vianey; Arteaga-Sánchez, Mario; Martínez de la Vega, Octavio; Nobuta, Kan; Vemaraju, Kalyan; Meyers, Blake C.; Vielle-Calzada, Jean-Philippe

    2012-01-01

    The life cycle of flowering plants alternates between a predominant sporophytic (diploid) and an ephemeral gametophytic (haploid) generation that only occurs in reproductive organs. In Arabidopsis thaliana, the female gametophyte is deeply embedded within the ovule, complicating the study of the genetic and molecular interactions involved in the sporophytic to gametophytic transition. Massively parallel signature sequencing (MPSS) was used to conduct a quantitative large-scale transcriptional analysis of the fully differentiated Arabidopsis ovule prior to fertilization. The expression of 9775 genes was quantified in wild-type ovules, additionally detecting >2200 new transcripts mapping to antisense or intergenic regions. A quantitative comparison of global expression in wild-type and sporocyteless (spl) individuals resulted in 1301 genes showing 25-fold reduced or null activity in ovules lacking a female gametophyte, including those encoding 92 signalling proteins, 75 transcription factors, and 72 RNA-binding proteins not reported in previous studies based on microarray profiling. A combination of independent genetic and molecular strategies confirmed the differential expression of 28 of them, showing that they are either preferentially active in the female gametophyte, or dependent on the presence of a female gametophyte to be expressed in sporophytic cells of the ovule. Among 18 genes encoding pentatricopeptide-repeat proteins (PPRs) that show transcriptional activity in wild-type but not spl ovules, CIHUATEOTL (At4g38150) is specifically expressed in the female gametophyte and necessary for female gametogenesis. These results expand the nature of the transcriptional universe present in the ovule of Arabidopsis, and offer a large-scale quantitative reference of global expression for future genomic and developmental studies. PMID:22442422

  1. Transcriptional analysis of the Arabidopsis ovule by massively parallel signature sequencing.

    PubMed

    Sánchez-León, Nidia; Arteaga-Vázquez, Mario; Alvarez-Mejía, César; Mendiola-Soto, Javier; Durán-Figueroa, Noé; Rodríguez-Leal, Daniel; Rodríguez-Arévalo, Isaac; García-Campayo, Vicenta; García-Aguilar, Marcelina; Olmedo-Monfil, Vianey; Arteaga-Sánchez, Mario; de la Vega, Octavio Martínez; Nobuta, Kan; Vemaraju, Kalyan; Meyers, Blake C; Vielle-Calzada, Jean-Philippe

    2012-06-01

    The life cycle of flowering plants alternates between a predominant sporophytic (diploid) and an ephemeral gametophytic (haploid) generation that only occurs in reproductive organs. In Arabidopsis thaliana, the female gametophyte is deeply embedded within the ovule, complicating the study of the genetic and molecular interactions involved in the sporophytic to gametophytic transition. Massively parallel signature sequencing (MPSS) was used to conduct a quantitative large-scale transcriptional analysis of the fully differentiated Arabidopsis ovule prior to fertilization. The expression of 9775 genes was quantified in wild-type ovules, additionally detecting >2200 new transcripts mapping to antisense or intergenic regions. A quantitative comparison of global expression in wild-type and sporocyteless (spl) individuals resulted in 1301 genes showing 25-fold reduced or null activity in ovules lacking a female gametophyte, including those encoding 92 signalling proteins, 75 transcription factors, and 72 RNA-binding proteins not reported in previous studies based on microarray profiling. A combination of independent genetic and molecular strategies confirmed the differential expression of 28 of them, showing that they are either preferentially active in the female gametophyte, or dependent on the presence of a female gametophyte to be expressed in sporophytic cells of the ovule. Among 18 genes encoding pentatricopeptide-repeat proteins (PPRs) that show transcriptional activity in wild-type but not spl ovules, CIHUATEOTL (At4g38150) is specifically expressed in the female gametophyte and necessary for female gametogenesis. These results expand the nature of the transcriptional universe present in the ovule of Arabidopsis, and offer a large-scale quantitative reference of global expression for future genomic and developmental studies.

  2. Modulation of human dermal microvascular endothelial cell and human gingival fibroblast behavior by micropatterned silica coating surfaces for zirconia dental implant applications

    NASA Astrophysics Data System (ADS)

    Laranjeira, Marta S.; Carvalho, Ângela; Pelaez-Vargas, Alejandro; Hansford, Derek; Ferraz, Maria Pia; Coimbra, Susana; Costa, Elísio; Santos-Silva, Alice; Fernandes, Maria Helena; Monteiro, Fernando Jorge

    2014-04-01

    Dental ceramic implants have shown superior esthetic behavior and the absence of induced allergic disorders when compared to titanium implants. Zirconia may become a potential candidate to be used as an alternative to titanium dental implants if surface modifications are introduced. In this work, bioactive micropatterned silica coatings were produced on zirconia substrates, using a combined methodology of sol-gel processing and soft lithography. The aim of the work was to compare the in vitro behavior of human gingival fibroblasts (HGFs) and human dermal microvascular endothelial cells (HDMECs) on three types of silica-coated zirconia surfaces: flat and micropatterned (with pillars and with parallel grooves). Our results showed that cells had a higher metabolic activity (HGF, HDMEC) and increased gene expression levels of fibroblast-specific protein-1 (FSP-1) and collagen type I (COL I) on surfaces with pillars. Nevertheless, parallel grooved surfaces were able to guide cell growth. Even capillary tube-like networks of HDMEC were oriented according to the surface geometry. Zirconia and silica with different topographies have shown to be blood compatible and silica coating reduced bacteria adhesion. All together, the results indicated that microstructured bioactive coating seems to be an efficient strategy to improve soft tissue integration on zirconia implants, protecting implants from peri-implant inflammation and improving long-term implant stabilization. This new approach of micropatterned silica coating on zirconia substrates can generate promising novel dental implants, with surfaces that provide physical cues to guide cells and enhance their behavior.

  3. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and more recently the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  4. Recent advances in applying mass spectrometry and systems biology to determine brain dynamics.

    PubMed

    Scifo, Enzo; Calza, Giulio; Fuhrmann, Martin; Soliymani, Rabah; Baumann, Marc; Lalowski, Maciej

    2017-06-01

    Neurological disorders encompass various pathologies which disrupt normal brain physiology and function. Poor understanding of their underlying molecular mechanisms and their societal burden argues for the necessity of novel prevention strategies, early diagnostic techniques and alternative treatment options to reduce the scale of their expected increase. Areas covered: This review scrutinizes mass spectrometry based approaches used to investigate brain dynamics in various conditions, including neurodegenerative and neuropsychiatric disorders. Different proteomics workflows for isolation/enrichment of specific cell populations or brain regions, sample processing; mass spectrometry technologies, for differential proteome quantitation, analysis of post-translational modifications and imaging approaches in the brain are critically deliberated. Future directions, including analysis of cellular sub-compartments, targeted MS platforms (selected/parallel reaction monitoring) and use of mass cytometry are also discussed. Expert commentary: Here, we summarize and evaluate current mass spectrometry based approaches for determining brain dynamics in health and diseases states, with a focus on neurological disorders. Furthermore, we provide insight on current trends and new MS technologies with potential to improve this analysis.

  5. Students conception and perception of simple electrical circuit

    NASA Astrophysics Data System (ADS)

    Setyani, ND; Suparmi; Sarwanto; Handhika, J.

    2017-11-01

    This research aims to describe the profile of students’ conceptions and perceptions of the simple electrical circuit. The results are intended to serve as a reference for teachers in choosing learning models or strategies to improve understanding of physics concepts. The research method used is descriptive qualitative. Research subjects were students of the physics education program at Universitas Sebelas Maret, Surakarta, Indonesia (49 students). The results showed that students hold alternative conceptions, namely that (1) a high-voltage wire carries an electric current and can cause electric shock, (2) the potential difference and the resistance used in a circuit are influenced by the electric current, (3) the resistance of a lamp is proportional to the filament thickness, (4) the amount of electric current coming out of the positive pole of the battery is the same for all types of circuit, series or parallel (the battery is a constant current source), (5) the current at any resistor in a series circuit is influenced by the resistor used, and (6) a resistor consumes the current through it. These incorrect conceptions can lead to misconceptions.

  6. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ang; Song, Shuaiwen; Brugel, Eric

    As they continue to track Moore’s Law, modern parallel machines become increasingly complex, and effectively tuning application performance for these machines becomes a daunting task. Moreover, identifying performance bottlenecks at the application and architecture levels, as well as evaluating various optimization strategies, becomes extremely difficult when numerous correlated factors are entangled. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  8. LORAKS makes better SENSE: Phase-constrained partial fourier SENSE reconstruction without phase calibration.

    PubMed

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P

    2017-03-01

    Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  9. Possible origin and significance of extension-parallel drainages in Arizona's metamorphic core complexes

    USGS Publications Warehouse

    Spencer, J.E.

    2000-01-01

    The corrugated form of the Harcuvar, South Mountains, and Catalina metamorphic core complexes in Arizona reflects the shape of the middle Tertiary extensional detachment fault that projects over each complex. Corrugation axes are approximately parallel to the fault-displacement direction and to the footwall mylonitic lineation. The core complexes are locally incised by enigmatic, linear drainages that parallel corrugation axes and the inferred extension direction and are especially conspicuous on the crests of antiformal corrugations. These drainages have been attributed to erosional incision on a freshly denuded, planar, inclined fault ramp followed by folding that elevated and preserved some drainages on the crests of rising antiforms. According to this hypothesis, corrugations were produced by folding after subaerial exposure of detachment-fault footwalls. An alternative hypothesis, proposed here, is as follows. In a setting where preexisting drainages cross an active normal fault, each fault-slip event will cut each drainage into two segments separated by a freshly denuded fault ramp. The upper and lower drainage segments will remain hydraulically linked after each fault-slip event if the drainage in the hanging-wall block is incised, even if the stream is on the flank of an antiformal corrugation and there is a large component of strike-slip fault movement. Maintenance of hydraulic linkage during sequential fault-slip events will guide the lengthening stream down the fault ramp as the ramp is uncovered, and stream incision will form a progressively lengthening, extension-parallel, linear drainage segment. This mechanism for linear drainage genesis is compatible with corrugations as original irregularities of the detachment fault, and does not require folding after early to middle Miocene footwall exhumation. This is desirable because many drainages are incised into nonmylonitic crystalline footwall rocks that were probably not folded under low-temperature, surface conditions. An alternative hypothesis, that drainages were localized by small fault grooves as footwalls were uncovered, is not supported by analysis of a down-plunge fault projection for the southern Rincon Mountains that shows a linear drainage aligned with the crest of a small antiformal groove on the detachment fault, but this process could have been effective elsewhere. Lineation-parallel drainages now plunge gently southwestward on the southwest ends of antiformal corrugations in the South and Buckskin Mountains, but these drainages must have originally plunged northeastward if they formed by either of the two alternative processes proposed here. Footwall exhumation and incision by northeast-flowing streams was apparently followed by core-complex arching and drainage reversal.

  10. Coupling Cover Crops with Alternative Swine Manure Application Strategies: Manure-15N Tracer Studies

    USDA-ARS?s Scientific Manuscript database

    Integration of rye cover crops with alternative liquid swine (Sus scrofa L.) manure application strategies may enhance retention of manure N in corn (Zea mays L.) - soybean [Glycine max (L.) Merr] cropping systems. The objective of this study was to quantify uptake of manure derived-N by a rye (Seca...

  11. Disciplinary Practices in Schools and Principles of Alternatives to Corporal Punishment Strategies

    ERIC Educational Resources Information Center

    Moyo, George; Khewu, Noncedo P. D.; Bayaga, Anass

    2014-01-01

    The aim of the study was to determine the consistency prevailing between the disciplinary practices in the schools and the principles of the Alternatives-to-Corporal Punishment strategy. The three main research questions that guided the study were to determine (1) How much variance of offences can be explained by disciplinary measures of…

  12. Parallel overinterpretation of behavior of apes and corvids.

    PubMed

    Hampton, Robert

    2018-06-20

    The report by Kabadayi and Osvath (Science, 357(6347), 202-204, 2017) does not demonstrate planning in ravens. The behavior of corvids and apes is fascinating and will be best appreciated through well-designed experiments that explicitly test alternative explanations and that are interpreted without unjustified anthropomorphic embellishment.

  13. Student Teachers' Team Teaching during Field Experiences: An Evaluation by Their Mentors

    ERIC Educational Resources Information Center

    Simons, Mathea; Baeten, Marlies

    2016-01-01

    Since collaboration within schools gains importance, teacher educators are looking for alternative models of field experience inspired by collaborative learning. Team teaching is such a model. This study explores two team teaching models (parallel and sequential teaching) by investigating the mentors' perspective. Semi-structured interviews were…

  14. Measurements of Student and Teacher Perceptions of Co-Teaching Models

    ERIC Educational Resources Information Center

    Keeley, Randa G.

    2015-01-01

    Co-teaching is an accepted teaching model for inclusive classrooms. This study measured the perceptions of both students and teachers regarding the five most commonly used co-teaching models (i.e., One Teach/One Assist, Station Teaching, Alternative Teaching, Parallel Teaching, and Team Teaching). Additionally, this study compared student…

  15. Probability matching and strategy availability.

    PubMed

    Koehler, Derek J; James, Greta

    2010-09-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
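
    The gap between the two strategies is easy to see numerically. The following sketch simulates a binary prediction task with a 70/30 outcome split (a hypothetical parameter, not taken from the experiments) and compares the expected accuracy of matching against maximizing; matching settles near p^2 + (1-p)^2 = 0.58, while always guessing the majority outcome yields 0.70.

    ```python
    import random

    def expected_accuracy(p_major=0.7, trials=10_000, seed=0):
        """Compare probability matching with maximizing on a binary prediction
        task where one outcome occurs with probability p_major.
        Illustrative only; the parameters are hypothetical, not from the study."""
        rng = random.Random(seed)
        match_hits = max_hits = 0
        for _ in range(trials):
            outcome = rng.random() < p_major          # True = majority outcome
            match_guess = rng.random() < p_major      # matching: guess in proportion
            max_guess = True                          # maximizing: always guess majority
            match_hits += (match_guess == outcome)
            max_hits += (max_guess == outcome)
        return match_hits / trials, max_hits / trials

    if __name__ == "__main__":
        matching, maximizing = expected_accuracy()
        print(f"matching ~ {matching:.2f}, maximizing ~ {maximizing:.2f}")
        # Analytically: matching -> 0.7**2 + 0.3**2 = 0.58, maximizing -> 0.70
    ```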

  16. Parallel algorithms for boundary value problems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are two fold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.
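
    As an illustration of the two-step pattern, the sketch below implements only the global step: a single process solving a tridiagonal system of order P with the Thomas algorithm. The coefficients are hypothetical placeholders; the local step, in which each of the P processors solves its own subproblem, is problem-specific and omitted here.

    ```python
    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c, and right-hand side d (Thomas algorithm).
        a[0] and c[-1] are unused."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Global step: one processor solves the order-P reduced (interface) system.
    P = 8
    a = np.full(P, -1.0); b = np.full(P, 4.0); c = np.full(P, -1.0)
    d = np.ones(P)
    print(thomas_solve(a, b, c, d))
    ```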

  17. Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.
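
    The contrast between the two programming models can be sketched with mpi4py, which exposes both MPI point-to-point routines and the MPI-2 one-sided (RMA) operations. This is only an illustration of the idea, unrelated to the benchmark kernels or the SMPlib library used in the study, and it assumes an MPI runtime plus the mpi4py and numpy packages.

    ```python
    # Run with: mpiexec -n 2 python rma_vs_p2p.py
    # Minimal sketch contrasting message passing (Send/Recv) with MPI-2
    # remote memory access (Put inside a fence epoch) via mpi4py.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # --- Point-to-point message passing ---
    buf = np.arange(4, dtype="d") if rank == 0 else np.zeros(4, dtype="d")
    if rank == 0:
        comm.Send([buf, MPI.DOUBLE], dest=1, tag=0)
    elif rank == 1:
        comm.Recv([buf, MPI.DOUBLE], source=0, tag=0)

    # --- Remote memory access (one-sided) ---
    window_mem = np.zeros(4, dtype="d")
    win = MPI.Win.Create(window_mem, comm=comm)
    win.Fence()
    if rank == 0:
        data = np.arange(4, dtype="d")
        win.Put([data, MPI.DOUBLE], target_rank=1)   # write into rank 1's window
    win.Fence()
    if rank == 1:
        print("received via RMA:", window_mem)
    win.Free()
    ```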

  18. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    PubMed Central

    Lu, Yimin; Zhang, Wei

    2007-01-01

    A fluorous tagging strategy is applied to solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. Twelve amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and final product purifications are performed with solid-phase extraction (SPE) over FluoroFlash™ cartridges; no chromatography is required. Using standard instruments and a straightforward SPE technique, one chemist accomplished the 120-member library synthesis in less than 5 working days, including starting material synthesis and product analysis. PMID:15789556

  19. Maternal Strategies to Access Food Differ by Food Security Status.

    PubMed

    Gorman, Kathleen S; McCurdy, Karen; Kisler, Tiffani; Metallinos-Katsaras, Elizabeth

    2017-01-01

    Household food insecurity is associated with health and behavior risk. Much less is known about how food insecurity is related to strategies that adults use in accessing food: how and where they shop, use of alternative food sources, and their ability to manage resources. To examine how maternal behaviors, including shopping, accessing alternative sources of food, and managing resources, are related to household food security status (HHFSS). Cross-sectional study collecting survey data on HHFSS, shopping behaviors, use of alternative food sources, and managing resources obtained from low-income mothers of preschool-aged children. One hundred sixty-four low-income mothers of young children (55% Hispanic) from two communities in Rhode Island. HHFSS was measured using 10 items from the 18-item Core Food Security Module to assess adult food security. Mothers were surveyed about where, when, and how often they shopped; the strategies they use when shopping; their use of alternative sources of food, including federal, state, and local assistance; and their ability to manage their resources. Analysis of variance and χ² analyses assessed the associations between demographic variables, shopping, accessing alternative food sources, and managing resources, and HHFSS. Multivariate logistic regression assessed the associations between HHFSS and maternal demographic variables, food shopping, strategies, alternative sources of food, and ability to manage resources. Maternal age and language spoken at home were significantly associated with HHFSS; food insecurity was 10% more likely among older mothers (adjusted odds ratio [aOR] 1.10, 95% CI 1.03 to 1.17) and 2.5 times more likely among Spanish-speaking households (compared with non-Spanish speaking [aOR 3.57, 95% CI 1.25 to 10.18]). Food insecurity was more likely among mothers reporting more informal strategies (aOR 1.98, 95% CI 1.28 to 3.01; P<0.05) and perceiving greater inability to manage resources (aOR 1.60, 95% CI 1.30 to 1.98; P<0.05). The results suggest that low-income mothers use a variety of strategies to feed their families and that the strategies they use vary by HHFSS. Community nutrition programs and providers will need to consider these strategies when counseling families at risk for food insecurity and provide guidance to minimize the influence on healthy food choices. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  20. Maternal Strategies to Access Food Differ by Food Security Status

    PubMed Central

    Gorman, Kathleen S.; McCurdy, Karen; Kisler, Tiffani; Metallinos-Katsaras, Elizabeth

    2016-01-01

    Background Household food insecurity is associated with health and behavior risk. Much less is known about how food insecurity is related to strategies that adults use in accessing food: how and where they shop, use of alternative food sources and their ability to manage resources. Objective To examine how maternal behaviors including shopping, accessing alternative sources of food and managing resources are related to household food security status (HHFSS). Design Cross-sectional study collecting survey data on HHFSS, shopping behaviors, use of alternative food sources and managing resources obtained from low income mothers of preschoolers. Participants 164 low-income mothers of young children (55% Hispanic) from two communities in Rhode Island. Measures HHFSS was measured using ten items from the 18-item Core Food Security Module to assess adult food security. Mothers were surveyed about where, when and how often they shopped; the strategies they use when shopping; their use of alternative sources of food including federal, state and local assistance; and their ability to manage their resources. Statistical analyses Analysis of Variance and Chi-square analyses assessed the associations between demographic variables, shopping, accessing alternative food sources and managing resources, and HHFSS. Multivariate logistic regression assessed the associations between HHFSS and maternal demographic variables, food shopping strategies, alternative sources of food and ability to manage resources. Results Maternal age and language spoken at home were significantly associated with HHFSS; food insecurity was 10% more likely among older mothers (AOR=1.10; 95% CI 1.03-1.17) and 2.5 times more likely among Spanish speaking households (compared to non-Spanish speaking-AOR=3.57; 95% CI 1.25-10.18). Food insecurity was more likely among mothers reporting more informal strategies (AOR=1.98; 95% CI 1.28-3.01, p<.05) and perceiving greater inability to manage resources (AOR=1.60; 95% CI 1.30-1.98, p<.05). Conclusions The results suggest that low-income mothers use a variety of strategies in order to feed their families and that the strategies they use vary by HHFSS. Community nutrition programs and providers will need to consider these strategies when counseling families at risk for food insecurity and provide guidance to minimize the impact on healthy food choices. PMID:27614689

  1. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. It executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts in the first stage. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
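
    The two-stage structure described above can be sketched as follows: several independent GA runs stand in for the first-stage searches on selected physical hosts, and their best individuals seed the initial population of the second-stage GA. The fitness function and all parameters below are toy placeholders, not the VM-placement objective from the paper.

    ```python
    import random

    def fitness(x):
        # Toy objective (a stand-in for the paper's performance/energy cost model):
        # maximize the negative squared distance to an arbitrary target vector.
        target = [3.0, -1.0, 2.0]
        return -sum((xi - ti) ** 2 for xi, ti in zip(x, target))

    def run_ga(population, generations, rng):
        """Minimal GA: tournament selection, blend crossover, Gaussian mutation."""
        for _ in range(generations):
            nxt = []
            for _ in range(len(population)):
                p1 = max(rng.sample(population, 2), key=fitness)
                p2 = max(rng.sample(population, 2), key=fitness)
                child = [(a + b) / 2 + rng.gauss(0, 0.1) for a, b in zip(p1, p2)]
                nxt.append(child)
            population = nxt
        return max(population, key=fitness)

    def random_population(size, rng):
        return [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(size)]

    if __name__ == "__main__":
        rng = random.Random(42)
        # Stage 1: independent GA runs (one per "selected physical host").
        stage1_best = [run_ga(random_population(20, rng), 50, rng) for _ in range(4)]
        # Stage 2: the stage-1 winners seed the initial population of a final GA.
        seeded = stage1_best + random_population(16, rng)
        best = run_ga(seeded, 50, rng)
        print("best vector:", [round(v, 3) for v in best], "fitness:", round(fitness(best), 4))
    ```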

  2. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. It executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts in the first stage. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.

  3. Alternative strategies: a better alternative.

    PubMed

    Doody, Dennis

    2010-05-01

    Alternatives can be defined as any financial asset other than traditional stocks and bonds. They include marketable alternatives, private capital, and equity real estate. There are two primary reasons for investing in alternatives: the potential for greater return and the opportunity to diversify a portfolio. Although alternatives were challenged in the highly volatile environment that existed in 2008 and early 2009, they generally lived up to expectations.

  4. Lessons From Recruitment to an Internet-Based Survey for Degenerative Cervical Myelopathy: Comparison of Free and Fee-Based Methods.

    PubMed

    Davies, Benjamin; Kotter, Mark

    2018-02-05

    Degenerative Cervical Myelopathy (DCM) is a syndrome of subacute cervical spinal cord compression due to spinal degeneration. Although DCM is thought to be common, many fundamental questions such as the natural history and epidemiology of DCM remain unknown. In order to answer these, access to a large cohort of patients with DCM is required. With its unrivalled and efficient reach, the Internet has become an attractive tool for medical research and may overcome these limitations in DCM. The most effective recruitment strategy, however, is unknown. To compare the efficacy of fee-based advertisement with alternative free recruitment strategies to a DCM Internet health survey. An Internet health survey (SurveyMonkey) accessed by a new DCM Internet platform (myelopathy.org) was created. Using multiple survey collectors and the website's Google Analytics, the efficacy of fee-based recruitment strategies (Google AdWords) and free alternatives (including Facebook, Twitter, and myelopathy.org) were compared. Overall, 760 surveys (513 [68%] fully completed) were accessed, 305 (40%) from fee-based strategies and 455 (60%) from free alternatives. Accounting for researcher time, fee-based strategies were more expensive ($7.8 per response compared to $3.8 per response for free alternatives) and identified a less motivated audience (Click-Through-Rate of 5% compared to 57% using free alternatives) but were more time efficient for the researcher (2 minutes per response compared to 16 minutes per response for free methods). Facebook was the most effective free strategy, providing 239 (31%) responses, where a single message to 4 existing communities yielded 133 (18%) responses within 7 days. The Internet can efficiently reach large numbers of patients. Free and fee-based recruitment strategies both have merits. Facebook communities are a rich resource for Internet researchers. ©Benjamin Davies, Mark Kotter. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 05.02.2018.

  5. Lessons From Recruitment to an Internet-Based Survey for Degenerative Cervical Myelopathy: Comparison of Free and Fee-Based Methods

    PubMed Central

    2018-01-01

    Background Degenerative Cervical Myelopathy (DCM) is a syndrome of subacute cervical spinal cord compression due to spinal degeneration. Although DCM is thought to be common, many fundamental questions such as the natural history and epidemiology of DCM remain unknown. In order to answer these, access to a large cohort of patients with DCM is required. With its unrivalled and efficient reach, the Internet has become an attractive tool for medical research and may overcome these limitations in DCM. The most effective recruitment strategy, however, is unknown. Objective To compare the efficacy of fee-based advertisement with alternative free recruitment strategies to a DCM Internet health survey. Methods An Internet health survey (SurveyMonkey) accessed by a new DCM Internet platform (myelopathy.org) was created. Using multiple survey collectors and the website’s Google Analytics, the efficacy of fee-based recruitment strategies (Google AdWords) and free alternatives (including Facebook, Twitter, and myelopathy.org) were compared. Results Overall, 760 surveys (513 [68%] fully completed) were accessed, 305 (40%) from fee-based strategies and 455 (60%) from free alternatives. Accounting for researcher time, fee-based strategies were more expensive ($7.8 per response compared to $3.8 per response for free alternatives) and identified a less motivated audience (Click-Through-Rate of 5% compared to 57% using free alternatives) but were more time efficient for the researcher (2 minutes per response compared to 16 minutes per response for free methods). Facebook was the most effective free strategy, providing 239 (31%) responses, where a single message to 4 existing communities yielded 133 (18%) responses within 7 days. Conclusions The Internet can efficiently reach large numbers of patients. Free and fee-based recruitment strategies both have merits. Facebook communities are a rich resource for Internet researchers. PMID:29402760

  6. Cost-effectiveness analysis of mammography and clinical breast examination strategies

    PubMed Central

    Ahern, Charlotte Hsieh; Shen, Yu

    2009-01-01

    Purpose Breast cancer screening by mammography and clinical breast exam are commonly used for early tumor detection. Previous cost-effectiveness studies considered mammography alone or did not account for all relevant costs. In this study, we assessed the cost-effectiveness of screening schedules recommended by three major cancer organizations and compared them with alternative strategies. We considered costs of screening examinations, subsequent work-up, biopsy, and treatment interventions after diagnosis. Methods We used a microsimulation model to generate women’s life histories, and assessed screening and treatment impacts on survival. Using statistical models, we accounted for age-specific incidence, preclinical disease duration, and age-specific sensitivity and specificity for each screening modality. The outcomes of interest were quality-adjusted life years (QALYs) saved and total costs with a 3% annual discount rate. Incremental cost-effectiveness ratios were used to compare strategies. Sensitivity analyses were performed by varying some of the assumptions. Results Compared to guidelines from the National Cancer Institute and the U.S. Preventive Services Task Force, alternative strategies were more efficient. Mammography and clinical breast exam in alternating years from ages 40 to 79 was a cost-effective alternative compared to the guidelines, costing $35,500 per QALY saved compared with no screening. The American Cancer Society guideline was the most effective and the most expensive, costing over $680,000 for an added QALY compared to the above alternative. Conclusion Screening strategies with lower costs and benefits comparable to those currently recommended should be considered for implementation in practice and for future guidelines. PMID:19258473
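
    The comparison metric used here is the incremental cost-effectiveness ratio: the extra cost divided by the extra QALYs gained relative to a reference strategy. A minimal sketch, with purely hypothetical cost and QALY figures chosen only to illustrate the arithmetic:

    ```python
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
        return (cost_new - cost_ref) / (qaly_new - qaly_ref)

    # Hypothetical illustration (numbers are NOT from the study):
    # a strategy costing $12,000 and yielding 20.30 QALYs versus
    # no screening at $5,000 and 20.10 QALYs.
    print(icer(12_000, 20.30, 5_000, 20.10))   # 35000.0 dollars per QALY gained
    ```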

  7. Pattern of informed consent acquisition in patients undergoing emergent endovascular treatment for acute ischemic stroke

    PubMed Central

    Qureshi, Adnan I; Gilani, Sarwat; Adil, Malik M; Majidi, Shahram; Hassan, Ameer E; Miley, Jefferson T; Rodriguez, Gustavo J

    2014-01-01

    Background Telephone consent and two physician consents based on medical necessity are alternate strategies for time sensitive medical decisions but are not uniformly accepted for clinical practice or recruitment into clinical trials. We determined the rate of and associated outcomes with alternate consenting strategies in consecutive acute ischemic stroke patients receiving emergent endovascular treatment. Methods We divided patients into those treated based on in-person consent and those based on alternate strategies. We identified clinical and procedural differences and differences in hospital outcomes: symptomatic ICH and favorable outcome (defined by modified Rankin Scale of 0–2 at discharge) based on consenting methodology. Results Of a total of 159 patients treated, 119 were treated based on in-person consent (by the patient in 27 and legally authorized representative in 92 procedures). Another 40 patients were treated using alternate strategies (20 telephone consents and 20 two physician consents based on medical necessity). There was no difference in the mean ages and proportion of men among the two groups based on consenting methodology. There was a significantly greater time interval incurred between CT scan and initiation of endovascular procedure in those in whom in-person consent was obtained (117 ± 65 min versus 101 ± 45 min, p = 0.01). There was no significant difference in rates of ICH (9% versus 8%, p = 0.9), or favorable outcome at discharge (28% versus 30%, p = 0.8). Conclusions Consent through alternate strategies does not adversely affect procedural characteristics or outcome of patients and may be more time efficient than in-person consenting process. PMID:25132906

  8. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, a Tse instruction set, and a software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
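
    The array-logic hardware itself is not reproducible in a few lines, but the underlying idea of counting logic-1 elements by combining partial counts computed in parallel can be sketched in software. The chunking and worker count below are arbitrary illustrative choices.

    ```python
    from multiprocessing import Pool

    def count_ones(chunk):
        return sum(chunk)

    def parallel_popcount(bits, workers=4):
        """Count logic-1 elements of a binary array by summing per-worker
        partial counts, a software analogue of a combinational counting tree."""
        size = max(1, len(bits) // workers)
        chunks = [bits[i:i + size] for i in range(0, len(bits), size)]
        with Pool(workers) as pool:
            return sum(pool.map(count_ones, chunks))

    if __name__ == "__main__":
        image = [1, 0, 1, 1, 0, 0, 1, 0] * 1024
        print(parallel_popcount(image))   # 4096
    ```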

  9. Generation and investigation of terahertz Airy beam realized using parallel-plate waveguides

    NASA Astrophysics Data System (ADS)

    Wu, Mengru; Lang, Tingting; Shi, Guohua; Han, Zhanghua

    2018-03-01

    In this paper, the launching of an Airy beam in the terahertz region using waveguiding structures is proposed, designed, and numerically characterized. By properly designing the waveguide slit width and the packing number in different sections of the parallel-plate waveguide (PPWG) array, the arbitrary phase delay and lateral position-dependent amplitude transmission through the structure required to realize the target Airy beam profile can be easily fulfilled. Airy beams working at a frequency of 0.3 THz with good non-diffracting, self-bending, and self-healing features are demonstrated. This study represents a new alternative to scattering-based metasurface structures and can be utilized in many modern applications.
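
    For reference, the target profile mentioned above is commonly written in the standard paraxial form shown below; these are textbook Airy-beam expressions, not the specific PPWG design parameters of the paper.

    ```latex
    % Standard paraxial Airy-beam expressions (textbook forms, not taken from the paper).
    % E: field envelope, \mathrm{Ai}: Airy function, x_0: transverse scale,
    % a: exponential truncation factor (0 < a \ll 1), k: wavenumber at 0.3 THz.
    E(x, z{=}0) = \mathrm{Ai}\!\left(\frac{x}{x_0}\right)\exp\!\left(a\,\frac{x}{x_0}\right),
    \qquad
    x_{\mathrm{peak}}(z) \approx \frac{z^{2}}{4 k^{2} x_{0}^{3}} .
    ```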

  10. A constraint logic programming approach to associate 1D and 3D structural components for large protein complexes.

    PubMed

    Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang

    2007-01-01

    The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.

  11. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  12. Population annealing with weighted averages: A Monte Carlo method for rough free-energy landscapes

    NASA Astrophysics Data System (ADS)

    Machta, J.

    2010-08-01

    The population annealing algorithm introduced by Hukushima and Iba is described. Population annealing combines simulated annealing and Boltzmann weighted differential reproduction within a population of replicas to sample equilibrium states. Population annealing gives direct access to the free energy. It is shown that unbiased measurements of observables can be obtained by weighted averages over many runs with weight factors related to the free-energy estimate from the run. Population annealing is well suited to parallelization and may be a useful alternative to parallel tempering for systems with rough free-energy landscapes such as spin glasses. The method is demonstrated for spin glasses.
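
    The weighted-average idea can be sketched as follows, assuming Boltzmann-type weights proportional to exp(-beta * F_r) built from each run's free-energy estimate F_r; the exact estimator used in the paper is not reproduced, and the numbers below are hypothetical.

    ```python
    import numpy as np

    def weighted_average(observables, free_energy_estimates, beta):
        """Combine per-run estimates O_r with weights w_r proportional to
        exp(-beta * F_r), following the weighted-average idea described for
        population annealing. Shifting by min(F) keeps the exponentials
        numerically stable without changing the normalized weights."""
        F = np.asarray(free_energy_estimates, dtype=float)
        O = np.asarray(observables, dtype=float)
        w = np.exp(-beta * (F - F.min()))
        return float(np.sum(w * O) / np.sum(w))

    # Hypothetical per-run energies and free-energy estimates (not from the paper):
    print(weighted_average([-1.02, -0.97, -1.05], [-3.40, -3.10, -3.55], beta=1.0))
    ```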

  13. Serial multiplier arrays for parallel computation

    NASA Technical Reports Server (NTRS)

    Winters, Kel

    1990-01-01

    Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.

  14. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  15. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize fuel economy of a parallel hybrid electric vehicle over a target life of the battery. Many approaches to maximizing fuel economy of a parallel hybrid electric vehicle do not consider the effect of control strategy on the life of the battery. This leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences in the fast dynamics of vehicle power management and slow dynamics of battery aging. The control strategy is separated into two parts, (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher level control with slow update rate, e.g. once per month, responsible for generating optimal set points for PPM. The considered set points in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower level control with high update rate, e.g. a second, responsible for generating the optimal HEV energy management controls and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands to achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. On the other hand, the optimal unconstrained control strategy achieves marginally higher fuel economy, but fails to meet the target battery life.

  16. Optimized and parallelized implementation of the electronegativity equalization method and the atom-bond electronegativity equalization method.

    PubMed

    Vareková, R Svobodová; Koca, J

    2006-02-01

    The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called "equalization methods" EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. It has been found that the most time-consuming part of equalization methods is the reduction of the matrix belonging to the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the programs EEM SOLVER and ABEEM SOLVER showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.

  17. Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code

    NASA Astrophysics Data System (ADS)

    Payne, J.; McCune, D.; Prater, R.

    2010-11-01

    NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.

  18. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
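
    One simple member of the polynomial preconditioning family mentioned above is the truncated Neumann series built from the diagonal of the matrix; it needs only matrix-vector products, which is what makes it attractive on vector and parallel machines. A minimal sketch, with a small hypothetical test matrix:

    ```python
    import numpy as np

    def neumann_precondition(A, r, degree=3):
        """Approximate A^{-1} r with the truncated Neumann series
        M^{-1} = sum_{k=0}^{m} (I - D^{-1} A)^k D^{-1}, where D = diag(A).
        Only matrix-vector products are needed, so each application
        vectorizes and parallelizes well."""
        d_inv = 1.0 / np.diag(A)
        z = d_inv * r                          # k = 0 term
        term = z.copy()
        for _ in range(degree):
            term = term - d_inv * (A @ term)   # apply (I - D^{-1} A)
            z += term
        return z

    # Tiny diagonally dominant example (hypothetical):
    A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
    r = np.array([1.0, 2.0, 3.0])
    print(neumann_precondition(A, r), np.linalg.solve(A, r))
    ```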

  19. Inflated speedups in parallel simulations via malloc()

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
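
    The size-keyed caching idea is language-agnostic; the sketch below is a conceptual Python analogue of the malloc() wrapper described above (the actual fix would of course be written in C against the system allocator). Freed blocks are kept on per-size free lists and handed back to later requests of the same size.

    ```python
    from collections import defaultdict

    class BlockCache:
        """Size-keyed free lists: freed blocks are kept and handed back to
        later requests of the same size instead of going to the allocator."""
        def __init__(self):
            self._free = defaultdict(list)
            self.allocator_calls = 0

        def alloc(self, size):
            if self._free[size]:
                return self._free[size].pop()
            self.allocator_calls += 1        # would be a real malloc() in C
            return bytearray(size)

        def free(self, block):
            self._free[len(block)].append(block)

    cache = BlockCache()
    blocks = [cache.alloc(64) for _ in range(100)]
    for b in blocks:
        cache.free(b)
    blocks = [cache.alloc(64) for _ in range(100)]   # all served from the cache
    print(cache.allocator_calls)                     # 100, not 200
    ```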

  20. Navigation strategy training using virtual reality in six chronic stroke patients: A novel and explorative approach to the rehabilitation of navigation impairment.

    PubMed

    Claessen, Michiel H G; van der Ham, Ineke J M; Jagersma, Elbrich; Visser-Meily, Johanna M A

    2016-10-01

    Recent studies have shown that navigation impairment is a common complaint after brain injury. Effective training programmes aiming to improve navigation ability in neurological patients are, however, scarce. The few reported programmes are merely focused on recalling specific routes rather than encouraging brain-damaged patients to use an alternative navigation strategy, applicable to any route. Our aim was therefore to investigate the feasibility of a (virtual reality) navigation training as a tool to instruct chronic stroke patients to adopt an alternative navigation strategy. Navigation ability was systematically assessed before the training. The training approach was then determined based on the individual pattern of navigation deficits of each patient. The use of virtual reality in the navigation strategy training in six middle-aged stroke patients was found to be highly feasible. Furthermore, five patients learned to (partially) apply an alternative navigation strategy in the virtual environment, suggesting that navigation strategies are mouldable rather than static. In the evaluation of their training experiences, the patients judged the training as valuable and proposed some suggestions for further improvement. The notion that the navigation strategy people use can be influenced after a short training procedure is a novel finding and initiates a direction for future studies.

  1. Alternative Strategies to Improve and Expand the Delivery of Vocational Education in Small, Rural, and/or Isolated Secondary Schools in Hawaii.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu. Office of the Director for Vocational Education.

    Intended for administrators of schools within the Hawaii Department of Education, this document provides descriptions of 34 alternative strategies implemented by small, rural, and/or isolated secondary schools across the nation to improve the quality of their vocational programs. Introductory materials discuss the document's purpose, the need for…

  2. Editor's Perspective Article: Alternative Certification Teachers--Strategies for the Transition to a New Career

    ERIC Educational Resources Information Center

    Evans, Brian R.

    2015-01-01

    New teachers who are prepared to teach through alternative certification pathways may find the transition to a new career stressful and tumultuous. There are techniques that can be used to help make the transition easier on new teachers as they begin their new careers. This article explores several strategies for new teachers, which include…

  3. Strategy for distribution of influenza vaccine to high-risk groups and children.

    PubMed

    Longini, Ira M; Halloran, M Elizabeth

    2005-02-15

    Despite evidence that vaccinating schoolchildren against influenza is effective in limiting community-level transmission, the United States has had a long-standing government strategy of recommending that vaccine be concentrated primarily in high-risk groups and distributed to those people who keep the health system and social infrastructure operating. Because of this year's influenza vaccine shortage, a plan was enacted to distribute the limited vaccine stock to these groups first. This vaccination strategy, based on direct protection of those most at risk, has not been very effective in reducing influenza morbidity and mortality. Although it is too late to make changes this year, the current influenza vaccine crisis affords the opportunity to examine an alternative for future years. The alternative plan, supported by mathematical models and influenza field studies, would be to concentrate vaccine in schoolchildren, the population group most responsible for transmission, while also covering the reachable high-risk groups, who would also receive considerable indirect protection. In conjunction with a plan to ensure an adequate vaccine supply, this alternative influenza vaccination strategy would help control interpandemic influenza and be instrumental in preparing for pandemic influenza. The effectiveness of the alternative plan could be assessed through nationwide community studies.

  4. Evidence for an alternation strategy in time-place learning.

    PubMed

    Pizzo, Matthew J; Crystal, Jonathon D

    2004-11-30

    Many different conclusions concerning what type of mechanism rats use to solve a daily time-place task have emerged in the literature. The purpose of this study was to test three competing explanations of time-place discrimination. Rats (n = 10) were tested twice daily in a T-maze, separated by approximately 7 h. Food was available at one location in the morning and another location in the afternoon. After the rats learned to visit each location at the appropriate time, tests were omitted to evaluate whether the rats were utilizing time-of-day (i.e., a circadian oscillator) or an alternation strategy (i.e., visiting a correct location is a cue to visit the next location). Performance on this test was significantly lower than chance, ruling out the use of time-of-day. A phase advance of the light cycle was conducted to test the alternation strategy and timing with respect to the light cycle (i.e., an interval timer). There was no difference between probe and baseline performance. These results suggest that the rats used an alternation strategy to meet the temporal and spatial contingencies in the time-place task.

  5. Micro/Nanoscale Parallel Patterning of Functional Biomolecules, Organic Fluorophores and Colloidal Nanocrystals

    PubMed Central

    2009-01-01

    We describe the design and optimization of a reliable strategy that combines self-assembly and lithographic techniques, leading to very precise micro-/nanopositioning of biomolecules for the realization of micro- and nanoarrays of functional DNA and antibodies. Moreover, based on the covalent immobilization of stable and versatile SAMs of programmable chemical reactivity, this approach constitutes a general platform for the parallel site-specific deposition of a wide range of molecules such as organic fluorophores and water-soluble colloidal nanocrystals. PMID:20596482

  6. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    NASA Astrophysics Data System (ADS)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate, and the linear size of the lattice in the evolution of this model.

  7. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  8. Solid oxide fuel cell having compound cross flow gas patterns

    DOEpatents

    Fraioli, A.V.

    1983-10-12

    A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas, 90 where the previously mentioned line-like contact areas cross one another.

  9. Solid oxide fuel cell having compound cross flow gas patterns

    DOEpatents

    Fraioli, Anthony V.

    1985-01-01

    A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas, 90 where the previously mentioned line-like contact areas cross one another.

  10. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.

  11. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  12. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    NASA Astrophysics Data System (ADS)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
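
    The manager/members structure can be sketched with a toy 0/1 knapsack in which each "member" explores a different neighbourhood (bit flips versus swaps) and the "manager" collects each round's results and shares the incumbent best. The disjunctive constraints of the actual problem and the parallel execution are omitted, and the instance data are hypothetical.

    ```python
    import random

    # Toy 0/1 knapsack instance (hypothetical; the disjunctive constraints
    # between items studied in the paper are omitted for brevity).
    VALUES  = [10, 7, 4, 9, 6, 3, 8, 5]
    WEIGHTS = [ 5, 4, 2, 6, 3, 1, 5, 3]
    CAPACITY = 15

    def value(sol):
        w = sum(wi for wi, s in zip(WEIGHTS, sol) if s)
        return sum(vi for vi, s in zip(VALUES, sol) if s) if w <= CAPACITY else -1

    def flip_move(sol, rng):
        i = rng.randrange(len(sol)); out = sol[:]; out[i] ^= 1; return out

    def swap_move(sol, rng):
        i, j = rng.sample(range(len(sol)), 2)
        out = sol[:]; out[i], out[j] = out[j], out[i]; return out

    def member_search(start, move, steps, rng):
        best = start
        for _ in range(steps):
            cand = move(best, rng)
            if value(cand) >= value(best):
                best = cand
        return best

    if __name__ == "__main__":
        rng = random.Random(7)
        best = [0] * len(VALUES)                     # manager's incumbent
        members = [flip_move, swap_move]             # each member's own neighbourhood
        for _ in range(20):                          # cooperation rounds
            results = [member_search(best, m, 50, rng) for m in members]
            best = max(results + [best], key=value)  # manager collects and shares best
        print(best, value(best))
    ```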

  14. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  15. A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.

    PubMed

    Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang

    2018-01-01

    The strategy of Divide-and-Conquer (D&C) is one of the most frequently used programming patterns for designing efficient algorithms in computer science, and it has been parallelized on both shared memory and distributed memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, by following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the well-known D&C algorithm QuickHull, to serve as a sample and guide for parallelizing D&C algorithms on the GPU. The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers develop their own efficient GPU implementations with less effort.
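
    To make the divide-and-conquer structure concrete, a plain sequential QuickHull for 2-D points can be written as below; this is only a CPU sketch of the algorithm itself, not the paper's GPU implementation.

        def cross(o, a, b):
            # z-component of (a - o) x (b - o); positive if b is left of the directed line o->a
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def hull_side(pts, a, b):
            # Divide: keep points strictly left of a->b, pick the farthest one,
            # recurse on the two sub-edges; conquer by concatenating the results.
            left = [p for p in pts if cross(a, b, p) > 0]
            if not left:
                return []
            far = max(left, key=lambda p: cross(a, b, p))
            return hull_side(left, a, far) + [far] + hull_side(left, far, b)

        def quickhull(pts):
            a, b = min(pts), max(pts)           # leftmost and rightmost points
            return [a] + hull_side(pts, a, b) + [b] + hull_side(pts, b, a)

        print(quickhull([(0, 0), (2, 1), (1, 3), (3, 3), (4, 0), (2, 2)]))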

  16. Fast parallel 3D profilometer with DMD technology

    NASA Astrophysics Data System (ADS)

    Hou, Wenmei; Zhang, Yunbo

    2011-12-01

    Confocal microscopy has been a powerful tool for three-dimensional profile analysis, but a single-mode confocal microscope is limited by its scanning speed. This paper presents a 3D profilometer prototype of a parallel confocal microscope based on a DMD (Digital Micromirror Device). In this system the DMD takes the place of the Nipkow disk, a classical parallel scanning scheme, to realize parallel lateral scanning. Operated with a suitable pattern, the DMD generates a virtual pinhole array which separates the light into multiple beams. The key parameters that affect the measurement (pinhole size and lateral scanning distance) can be configured conveniently through the patterns sent to the DMD chip. To avoid disturbance between two virtual pinholes working at the same time, a scanning strategy is adopted. Depth response curves, both axial and abaxial, were extracted. Measurement experiments have been carried out on a structured silicon sample, and an axial resolution of 55 nm is achieved.

  17. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.

  18. B-MIC: An Ultrafast Three-Level Parallel Sequence Aligner Using MIC.

    PubMed

    Cui, Yingbo; Liao, Xiangke; Zhu, Xiaoqian; Wang, Bingqiang; Peng, Shaoliang

    2016-03-01

    Sequence alignment is the central step in sequence analysis, in which raw sequencing reads are mapped to a reference genome. The amount of data generated by NGS is far beyond the processing capabilities of existing alignment tools, so sequence alignment has become the bottleneck of sequence analysis, and intensive computing power is required to address this challenge. Intel recently announced the MIC coprocessor, which provides massive computing power; the Tianhe-2, currently the world's fastest supercomputer, is equipped with three MIC coprocessors per compute node. A key feature of sequence alignment is that different reads are independent. Exploiting this property, we proposed a MIC-oriented three-level parallelization strategy to speed up BWA, a widely used sequence alignment tool, and developed our ultrafast parallel sequence aligner, B-MIC. B-MIC contains three levels of parallelization: first, parallelization of data IO and read alignment through a three-stage parallel pipeline; second, parallelization enabled by MIC coprocessor technology; third, inter-node parallelization implemented with MPI. In this paper, we demonstrate that B-MIC outperforms BWA by a combination of these techniques on an Inspur NF5280M server and the Tianhe-2 supercomputer. To the best of our knowledge, B-MIC is the first sequence alignment tool to run on the Intel MIC, and it achieves more than a fivefold speedup over the original BWA while maintaining the alignment precision.
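
    The first level, a three-stage read/align/write pipeline, can be illustrated with the thread-and-queue sketch below; the toy reads and the stand-in "alignment" step are assumptions for illustration and are unrelated to BWA's actual kernel.

        import queue, threading

        reads_in, hits_out = queue.Queue(), queue.Queue()
        DONE = object()

        def reader():                            # stage 1: data IO (stand-in for FASTQ parsing)
            for read in ["ACGT", "GGCA", "TTAC"]:
                reads_in.put(read)
            reads_in.put(DONE)

        def aligner():                           # stage 2: alignment (stand-in scoring only)
            while (read := reads_in.get()) is not DONE:
                hits_out.put((read, hash(read) % 1000))
            hits_out.put(DONE)

        def writer():                            # stage 3: output (stand-in for SAM writing)
            while (hit := hits_out.get()) is not DONE:
                print(hit)

        threads = [threading.Thread(target=f) for f in (reader, aligner, writer)]
        for t in threads: t.start()
        for t in threads: t.join()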

  19. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking. Many existing solutions model it as a computational optimisation problem. Besides the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad: even if the search finds the lowest energy, there is no certainty of obtaining the protein structure with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirm that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains, and it provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms.
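
    The core idea of several colonies writing into one shared pheromone matrix can be sketched as follows for a toy pick-one-rotamer-per-residue problem; the energies, sizes and update rule are invented for illustration and are not pacoPacker's actual model.

        import random

        n_res, n_rot = 5, 4
        energy = [[random.Random(42 + i).uniform(0, 10) for _ in range(n_rot)] for i in range(n_res)]
        pheromone = [[1.0] * n_rot for _ in range(n_res)]   # single matrix shared by all colonies

        def build_solution(rng):
            sol = []
            for i in range(n_res):
                weights = [pheromone[i][r] / (1.0 + energy[i][r]) for r in range(n_rot)]
                sol.append(rng.choices(range(n_rot), weights=weights)[0])
            return sol

        def total_energy(sol):
            return sum(energy[i][r] for i, r in enumerate(sol))

        best = None
        for it in range(50):
            for colony in range(4):                   # the colonies would run concurrently in practice
                rng = random.Random(it * 10 + colony)
                sol = build_solution(rng)
                if best is None or total_energy(sol) < total_energy(best):
                    best = sol
            for i, r in enumerate(best):              # evaporate, then reinforce the incumbent
                for k in range(n_rot):
                    pheromone[i][k] *= 0.9
                pheromone[i][r] += 1.0
        print(total_energy(best), best)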

  20. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^{-1} = C - B^{*} A^{-1} B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^{-1}. For closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Lambda and its inverse Lambda^{-1} are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  1. An integrative strategy to identify the entire protein coding potential of prokaryotic genomes by proteogenomics.

    PubMed

    Omasits, Ulrich; Varadarajan, Adithi R; Schmid, Michael; Goetze, Sandra; Melidis, Damianos; Bourqui, Marc; Nikolayeva, Olga; Québatte, Maxime; Patrignani, Andrea; Dehio, Christoph; Frey, Juerg E; Robinson, Mark D; Wollscheid, Bernd; Ahrens, Christian H

    2017-12-01

    Accurate annotation of all protein-coding sequences (CDSs) is an essential prerequisite to fully exploit the rapidly growing repertoire of completely sequenced prokaryotic genomes. However, large discrepancies among the number of CDSs annotated by different resources, missed functional short open reading frames (sORFs), and overprediction of spurious ORFs represent serious limitations. Our strategy toward accurate and complete genome annotation consolidates CDSs from multiple reference annotation resources, ab initio gene prediction algorithms and in silico ORFs (a modified six-frame translation considering alternative start codons) in an integrated proteogenomics database (iPtgxDB) that covers the entire protein-coding potential of a prokaryotic genome. By extending the PeptideClassifier concept of unambiguous peptides for prokaryotes, close to 95% of the identifiable peptides imply one distinct protein, largely simplifying downstream analysis. Searching a comprehensive Bartonella henselae proteomics data set against such an iPtgxDB allowed us to unambiguously identify novel ORFs uniquely predicted by each resource, including lipoproteins, differentially expressed and membrane-localized proteins, novel start sites and wrongly annotated pseudogenes. Most novelties were confirmed by targeted, parallel reaction monitoring mass spectrometry, including unique ORFs and single amino acid variations (SAAVs) identified in a re-sequenced laboratory strain that are not present in its reference genome. We demonstrate the general applicability of our strategy for genomes with varying GC content and distinct taxonomic origin. We release iPtgxDBs for B. henselae , Bradyrhizobium diazoefficiens and Escherichia coli and the software to generate both proteogenomics search databases and integrated annotation files that can be viewed in a genome browser for any prokaryote. © 2017 Omasits et al.; Published by Cold Spring Harbor Laboratory Press.
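
    The in silico ORF component mentioned above (a six-frame scan that also accepts alternative start codons) can be illustrated with the short sketch below; the start/stop sets, the minimum length and the toy sequence are illustrative assumptions, and minus-strand coordinates are reported on the reverse complement.

        STARTS, STOPS = {"ATG", "GTG", "TTG"}, {"TAA", "TAG", "TGA"}

        def revcomp(seq):
            return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

        def orfs(seq, min_codons=3):
            found = []
            for strand, s in (("+", seq), ("-", revcomp(seq))):
                for frame in range(3):
                    codons = [s[i:i + 3] for i in range(frame, len(s) - 2, 3)]
                    start = None
                    for idx, codon in enumerate(codons):
                        if start is None and codon in STARTS:
                            start = idx
                        elif start is not None and codon in STOPS:
                            if idx - start >= min_codons:
                                found.append((strand, frame, start * 3 + frame, (idx + 1) * 3 + frame))
                            start = None
            return found

        print(orfs("ATGAAATTTGGGTAACCCGTGAAACCCTAG"))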

  2. Optimization of operation conditions for the startup of aerobic granular sludge reactors biologically removing carbon, nitrogen, and phosphorous.

    PubMed

    Lochmatter, Samuel; Holliger, Christof

    2014-08-01

    The transformation of conventional flocculent sludge to aerobic granular sludge (AGS) that biologically removes carbon, nitrogen and phosphorus (COD, N, P) is still a major challenge in the startup of AGS sequencing batch reactors (AGS-SBRs). On the one hand, rapid granulation is desired; on the other hand, good biological nutrient removal capacities have to be maintained. So far, several operation parameters have been studied separately, which makes it difficult to compare their impacts. We investigated seven operation parameters in parallel by applying a Plackett-Burman experimental design approach with the aim of proposing an optimized startup strategy. Five of the seven tested parameters had a significant impact on the startup duration. The conditions identified to allow a rapid startup of AGS-SBRs with good nutrient removal performance were (i) alternation of high and low dissolved oxygen phases during aeration, (ii) a settling strategy avoiding excessive biomass washout during the first weeks of reactor operation, (iii) adaptation of the contaminant load in the early stage of the startup to ensure that all soluble COD was consumed before the beginning of the aeration phase, (iv) a temperature of 20 °C, and (v) a neutral pH. Under such conditions, it took less than 30 days to produce granular sludge with high removal performance for COD, N, and P. A control run using this optimized startup strategy again produced AGS with good nutrient removal performance within four weeks, and the system was stable during the additional operation period of more than 50 days. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Understanding paradigms used for nursing research.

    PubMed

    Weaver, Kathryn; Olson, Joanne K

    2006-02-01

    The aims of this paper are to add clarity to the discussion about paradigms for nursing research and to consider integrative strategies for the development of nursing knowledge. Paradigms are sets of beliefs and practices, shared by communities of researchers, which regulate inquiry within disciplines. The various paradigms are characterized by ontological, epistemological and methodological differences in their approaches to conceptualizing and conducting research, and in their contribution towards disciplinary knowledge construction. Researchers may consider these differences so vast that one paradigm is incommensurable with another. Alternatively, researchers may ignore these differences and either unknowingly combine paradigms inappropriately or neglect to conduct needed research. To accomplish the task of developing nursing knowledge for use in practice, there is a need for a critical, integrated understanding of the paradigms used for nursing inquiry. We describe the evolution and influence of positivist, postpositivist, interpretive and critical theory research paradigms. Using integrative review, we compare and contrast the paradigms in terms of their philosophical underpinnings and scientific contribution. A pragmatic approach to theory development through synthesis of cumulative knowledge relevant to nursing practice is suggested. This requires that inquiry start with assessment of existing knowledge from disparate studies to identify key substantive content and gaps. Knowledge development in under-researched areas could be accomplished through integrative strategies that preserve theoretical integrity and strengthen research approaches associated with various philosophical perspectives. These strategies may include parallel studies within the same substantive domain using different paradigms; theoretical triangulation to combine findings from paradigmatically diverse studies; integrative reviews; and mixed method studies. Nurse scholars are urged to consider the benefits and limitations of inquiry within each paradigm, and the theoretical needs of the discipline.

  4. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
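
    As background for the critical path scheduling mentioned above, the following sketch list-schedules a small task DAG by descending bottom-level (the longest remaining path to an exit task); the example graph, service times and two-processor setting are invented, and deterministic times are assumed.

        def bottom_level(tasks, succ, time):
            bl = {}
            def rec(t):
                if t not in bl:
                    bl[t] = time[t] + max((rec(s) for s in succ.get(t, [])), default=0)
                return bl[t]
            for t in tasks:
                rec(t)
            return bl

        def schedule(tasks, succ, time, n_proc=2):
            pred = {t: [u for u in tasks if t in succ.get(u, [])] for t in tasks}
            bl = bottom_level(tasks, succ, time)
            ready_proc = [0.0] * n_proc                    # when each processor becomes free
            finish = {}
            for t in sorted(tasks, key=lambda t: -bl[t]):  # highest critical-path priority first
                p = min(range(n_proc), key=lambda i: ready_proc[i])
                start = max([ready_proc[p]] + [finish[u] for u in pred[t]])
                finish[t] = start + time[t]
                ready_proc[p] = finish[t]
            return finish

        succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
        time = {"A": 2, "B": 3, "C": 1, "D": 2}
        print(schedule(list(succ), succ, time))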

  5. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for Earth and ecological sciences as well as for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges, but they either relied on high performance computers and specialized hardware (GPUs) or focused mostly on customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) handles massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big Earth science data in a highly scalable environment.
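
    The tile-based spatial index in point 1) boils down to binning points by fixed-size tile keys, as in the sketch below; the tile size, the point list and the idea of using the keys as distributed-storage partition keys are illustrative assumptions.

        from collections import defaultdict

        def tile_index(points, tile_size=10.0):
            tiles = defaultdict(list)
            for x, y, z in points:
                key = (int(x // tile_size), int(y // tile_size))   # tile key from planar coordinates
                tiles[key].append((x, y, z))
            return tiles

        pts = [(3.2, 7.1, 100.5), (15.0, 2.3, 98.7), (12.4, 11.9, 101.2)]
        for key, members in tile_index(pts).items():
            print(key, len(members))   # each tile could be handed to a separate worker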

  6. Football's coming home?: Digital reterritorialization, contradictions in the transnational coverage of sport and the sociology of alternative football broadcasts.

    PubMed

    David, Matthew; Millward, Peter

    2012-06-01

    This article critically utilizes the work of Manuel Castells to discuss the issue of parallel imported broadcasts (specifically including live-streams) in football. This is of crucial importance to sport because the English Premier League is premised upon the sale of television rights broadcasts to domestic and overseas markets, and yet cheaper alternative broadcasts endanger the price of such rights. Evidence is drawn from qualitative fieldwork and library/Internet sources to explore the practices of supporters and the politics involved in the generation of alternative broadcasts. This enables us to clarify the core sociological themes of 'milieu of innovation' and 'locale' within today's digitally networked global society. © London School of Economics and Political Science 2012.

  7. Effectiveness of alternative rail passenger equipment crashworthiness strategies

    DOT National Transportation Integrated Search

    2006-04-04

    Crashworthiness strategies, which include crash energy management (CEM), pushback couplers, and push/pull operation, are evaluated and compared under specific collision conditions. Comparisons of three strategies are evaluated in this paper: ...

  8. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pautz, Shawn D.; Bailey, Teresa S.

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
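
    Domain overloading as described here simply means cutting the mesh into more subdomains than there are ranks and giving each rank several of them; a minimal sketch of that bookkeeping, on a purely hypothetical 1-D cell list with a block partitioner, is:

        def overloaded_partition(n_cells, n_ranks, overload=4):
            n_parts = n_ranks * overload                       # more subdomains than ranks
            parts = [list(range(p * n_cells // n_parts, (p + 1) * n_cells // n_parts))
                     for p in range(n_parts)]
            owner = {p: p % n_ranks for p in range(n_parts)}   # round-robin assignment to ranks
            return parts, owner

        parts, owner = overloaded_partition(n_cells=32, n_ranks=4)
        for p, cells in enumerate(parts):
            print(f"subdomain {p} (cells {cells[0]}-{cells[-1]}) -> rank {owner[p]}")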

  9. Local search to improve coordinate-based task mapping

    DOE PAGES

    Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...

    2015-10-31

    We present a local search strategy to improve the coordinate-based mapping of a parallel job's tasks to the MPI ranks of its parallel allocation, in order to reduce network congestion and the job's communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK systems. Using the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing runtime variability. Furthermore, we show that mapping quality can vary with the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
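
    The essence of such a local search is captured by the toy sketch below: a 4x4 stencil job, an arbitrary set of rank coordinates, and a pairwise-swap hill climb on the total hop count; every number in it is a made-up stand-in for the real allocation data.

        import itertools, random

        side = 4
        tasks = [(i, j) for i in range(side) for j in range(side)]
        pairs = [(a, b) for a in tasks for b in tasks
                 if a < b and abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1]   # stencil neighbours
        rank_xy = {r: (r % 5, r // 5) for r in range(len(tasks))}         # made-up mesh coordinates

        def hops(mapping):
            return sum(abs(rank_xy[mapping[a]][0] - rank_xy[mapping[b]][0]) +
                       abs(rank_xy[mapping[a]][1] - rank_xy[mapping[b]][1]) for a, b in pairs)

        mapping = dict(zip(tasks, random.Random(0).sample(range(len(tasks)), len(tasks))))
        cur, improved = hops(mapping), True
        while improved:                       # keep any pairwise swap that lowers the hop count
            improved = False
            for a, b in itertools.combinations(tasks, 2):
                mapping[a], mapping[b] = mapping[b], mapping[a]
                new = hops(mapping)
                if new < cur:
                    cur, improved = new, True
                else:
                    mapping[a], mapping[b] = mapping[b], mapping[a]       # undo the swap
        print("total hops after local search:", cur)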

  10. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  11. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
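
    One concrete instance of "ordering matters" is bandwidth reduction before SPMV and CG: the sketch below scrambles a 2-D Poisson matrix and then recovers a narrow band with reverse Cuthill-McKee. It only illustrates locality with SciPy and is unrelated to the paper's specific partitioners or the Tera MTA.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import reverse_cuthill_mckee
        from scipy.sparse.linalg import cg

        # 2-D 5-point Poisson matrix on a 20x20 grid, scrambled by a random permutation
        # to mimic a poor initial ordering, then reordered with reverse Cuthill-McKee.
        m = 20
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
        A = sp.kronsum(T, T).tocsr()                        # SPD, n = 400
        p_bad = np.random.default_rng(0).permutation(A.shape[0])
        A_bad = A[p_bad][:, p_bad].tocsr()

        p_rcm = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
        A_rcm = A_bad[p_rcm][:, p_rcm].tocsr()

        def bandwidth(M):
            coo = M.tocoo()
            return int(np.abs(coo.row - coo.col).max())

        print("bandwidth scrambled:", bandwidth(A_bad), "after RCM:", bandwidth(A_rcm))
        x, info = cg(A_rcm, np.ones(A_rcm.shape[0]))        # CG on the reordered SPD system
        print("CG converged:", info == 0)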

  12. Parallel-processing with surface plasmons, a new strategy for converting the broad solar spectrum

    NASA Technical Reports Server (NTRS)

    Anderson, L. M.

    1982-01-01

    A new strategy for efficient solar-energy conversion is based on parallel processing with surface plasmons: guided electromagnetic waves supported on thin films of common metals like aluminum or silver. The approach is unique in identifying a broadband carrier with suitable range for energy transport and an inelastic tunneling process which can be used to extract more energy from the more energetic carriers without requiring different materials for each frequency band. The aim is to overcome the fundamental 56-percent loss associated with mismatch between the broad solar spectrum and the monoenergetic conduction electrons used to transport energy in conventional silicon solar cells. This paper presents a qualitative discussion of the unknowns and barrier problems, including ideas for coupling surface plasmons into the tunnels, a step which has been the weak link in the efficiency chain.

  13. Coping with changing conditions: alternative strategies for the delivery of maternal and child health and family planning services in Dhaka, Bangladesh.

    PubMed Central

    Routh, S.; el Arifeen, S.; Jahan, S. A.; Begum, A.; Thwin, A. A.; Baqui, A. H.

    2001-01-01

    The door-to-door distribution of contraceptives and information on maternal and child health and family planning (MCH-FP) services, through bimonthly visits to eligible couples by trained fieldworkers, has been instrumental in increasing the contraceptive prevalence rate and immunization coverage in Bangladesh. The doorstep delivery strategy, however, is labour-intensive and costly. More cost-effective service delivery strategies are needed, not only for family planning services but also for a broader package of reproductive and other essential health services. Against this backdrop, operations research was conducted by the Centre for Health and Population Research at the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) from January 1996 to May 1997, in collaboration with government agencies and a leading national nongovernmental organization, with a view to developing and field-testing alternative approaches to the delivery of MCH-FP services in urban areas. Two alternative strategies featuring the withdrawal of home-based distribution and the delivery of basic health care from fixed-site facilities were tested in two areas of Dhaka. The clinic-based service delivery strategy was found to be a feasible alternative to the resource-intensive doorstep system in urban Dhaka. It did not adversely affect programme performance and it allowed the needs of clients to be addressed holistically through a package of essential health and family planning services. PMID:11242821

  14. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing these data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the acquired data volumes. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing the resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
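
    The decompose/search/recompose pattern described here can be sketched generically as below; the chunking, the stand-in scoring function and the process pool are assumptions for illustration and do not reproduce the mzXML/pepXML handling or the actual search engines.

        from concurrent.futures import ProcessPoolExecutor

        def search_chunk(chunk):
            # Stand-in for running a search engine on one decomposed file.
            return [(spec_id, len(peaks)) for spec_id, peaks in chunk]

        def chunks(items, n):
            k = (len(items) + n - 1) // n
            return [items[i:i + k] for i in range(0, len(items), k)]

        if __name__ == "__main__":
            spectra = [(f"scan{i}", list(range(i % 7 + 1))) for i in range(20)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                partial = pool.map(search_chunk, chunks(spectra, 4))   # decompose + parallel search
            merged = [hit for part in partial for hit in part]         # recompose the results
            print(len(merged), merged[:3])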

  15. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  16. Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

    DTIC Science & Technology

    2014-09-01

    generation, exotic storage technologies, smart power grid management, and better power sources for directed-energy weapons (DEW). Accessible partner nation... near term will help to mitigate risks and improve outcomes. Forecasting typically extrapolates predictions based... eventually, diminished national power. Within this context, this paper examines policy, legal, ethical, and strategy implications for DoD from the impact

  17. A novel methodology to characterize interfacility transfer strategies in a trauma transfer network.

    PubMed

    Gomez, David; Haas, Barbara; Larsen, Kristian; Alali, Aziz S; MacDonald, Russell D; Singh, Jeffrey M; Tien, Homer; Iwashyna, Theodore J; Rubenfeld, Gordon; Nathens, Avery B

    2016-10-01

    More than half of severely injured patients are initially transported from the scene of injury to nontrauma centers (NTCs), with many requiring subsequent transfer to trauma center (TC) care. Definitive care in the setting of severe injury is time sensitive. However, transferring severely injured patients from an NTC is a complex process often fraught with delays. Selection of the receiving TC and the mode of interfacility transport both strongly influence total transfer time and are highly amenable to quality improvement initiatives. We analyzed transfer strategies, defined as the pairing of a destination and mode of transport (land vs. rotary wing vs. fixed wing), for severely injured adult patients. Existing transfer strategies at each NTC were derived from trauma registry data. Geographic Information Systems network analysis was used to identify the strategy that minimized transfer times the most as well as alternate strategies (+15 or +30 minutes) for each NTC. Transfer network efficiency was characterized based on optimality and stability. We identified 7,702 severely injured adult patients transferred from 146 NTCs to 9 TCs. Nontrauma centers transferred severely injured patients to a median of 3 (interquartile range, 1-4) different TCs and utilized a median of 4 (interquartile range, 2-6) different transfer strategies. After allowing for the use of alternate transfer strategies, 73.1% of severely injured patients were transported using optimal/alternate strategies, and only 40.4% of NTCs transferred more than 90% of patients using an optimal/alternate transfer strategy. Three quarters (75.5%) of transfers occurred between NTCs and their most common receiving TC. More than a quarter of patients with severe traumatic injuries undergoing interfacility transport to a TC in Ontario are consistently transported using a nonoptimal combination of destination and mode of transport. Our novel analytic approach can be easily adapted to different system configurations and provides actionable data that can be provided to NTCs and other stakeholders. Therapeutic study, level IV.

  18. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  19. Short wavelength laser

    DOEpatents

    Hagelstein, P.L.

    1984-06-25

    A short wavelength laser is provided that is driven by conventional-laser pulses. A multiplicity of panels, mounted on substrates, are supported in two separated and alternately staggered facing and parallel arrays disposed along an approximately linear path. When the panels are illuminated by the conventional-laser pulses, single pass EUV or soft x-ray laser pulses are produced.

  20. Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data

    NASA Technical Reports Server (NTRS)

    Vanderesch, A. H.

    1972-01-01

    Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.

  1. An Alternative Approach to Capacitors in Complex Arrangements

    ERIC Educational Resources Information Center

    Atkin, Keith

    2012-01-01

    Examples of capacitive circuits easily reducible to series and parallel combinations abound in the textbooks but students are rarely exposed to examples where such simple procedures are apparently impossible. This paper extends that of a previous contributor by showing how the delta-star theorem of network theory can resolve such difficulties.…
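
    For readers who want to see the delta-star idea in action on capacitors: because capacitances combine like conductances (parallel capacitors add; series capacitors combine as a product over a sum), the delta-to-star conversion uses the admittance form of the theorem. The bridge below is a made-up example reduced with that rule; it is not taken from the article.

        def delta_to_star(c_xy, c_yz, c_zx):
            # Delta branches between nodes (x,y), (y,z), (z,x) -> star capacitors at x, y, z.
            # Star capacitor at a node = (sum of pairwise products) / (delta branch opposite that node).
            s = c_xy * c_yz + c_yz * c_zx + c_zx * c_xy
            return s / c_yz, s / c_zx, s / c_xy

        def series(*caps):
            return 1.0 / sum(1.0 / c for c in caps)

        # Capacitor bridge between terminals A and B (microfarads, invented values):
        cA1, cA2, c12, c1B, c2B = 2.0, 3.0, 1.0, 4.0, 5.0

        # Replace the delta (A, 1, 2) by a star with centre S, then reduce series/parallel.
        cAS, c1S, c2S = delta_to_star(cA1, c12, cA2)       # x=A, y=1, z=2
        c_AB = series(cAS, series(c1S, c1B) + series(c2S, c2B))
        print(f"equivalent capacitance A-B: {c_AB:.3f} uF")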

  2. Chaos and Christianity: A Response to Butz and a Biblical Alternative.

    ERIC Educational Resources Information Center

    Watts, Richard E.; Trusty, Jerry

    1997-01-01

    M.R. Butz's position regarding chaos theory and Christianity is reviewed. The compatibility of biblical theology and the sciences is discussed. Parallels between chaos theory and the philosophical perspective of Soren Kierkegaard are explored. A biblical model is offered for counselors in assisting Christian clients in embracing chaos. (Author/EMK)

  3. Separation-Individuation Revisited: On the Interplay of Parent-Adolescent Relations, Identity and Emotional Adjustment in Adolescence

    ERIC Educational Resources Information Center

    Meeus, Wim; Iedema, Jurjen; Maassen, Gerard; Engels, Rutger

    2005-01-01

    The objective of this study was to test our alternative interpretation of the separation-individuation hypothesis. This interpretation states that separation from the parents is not a precondition for individuation, but rather separation and individuation are two parallel processes of development during adolescence. We investigated our…

  4. Emerging Definitions of Leadership in Higher Education: New Visions of Leadership or Same Old "Hero" Leader?

    ERIC Educational Resources Information Center

    Eddy, Pamela L.; VanDerLinden, Kim E.

    2006-01-01

    The higher education literature suggests that alternative leadership styles are replacing the traditionally held definitions of leadership and provide new and different (and possibly superior) ways to understand leadership. This article looks for parallels within the current leadership literature to see if community college administrators use the…

  5. Interface colloidal robotic manipulator

    DOEpatents

    Aronson, Igor; Snezhko, Oleksiy

    2015-08-04

    A magnetic colloidal system confined at the interface between two immiscible liquids and energized by an alternating magnetic field dynamically self-assembles into localized asters and arrays of asters. The colloidal system exhibits locomotion and shape change. By controlling a small external magnetic field applied parallel to the interface, structures can capture, transport, and position target particles.

  6. Parallel family trees for transfer matrices in the Potts model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m) where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy on a supercomputer could contribute to the study of wider strip lattices.

  7. Ordered fast fourier transforms on a massively parallel hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Tong, Charles; Swarztrauber, Paul N.

    1989-01-01

    Design alternatives for ordered Fast Fourier Transform (FFT) algorithms were examined on massively parallel hypercube multiprocessors such as the Connection Machine. Particular emphasis is placed on reducing communication, which is known to dominate the overall computing time. To this end, the order and computational phases of the FFT were combined, and sequence-to-processor maps that reduce communication were used. The class of ordered transforms is expanded to include any FFT in which the order of the transform is the same as that of the input sequence. Two such orderings are examined, namely standard-order and A-order, which can be implemented with equal ease on the Connection Machine, where orderings are determined by geometries and priorities. If the sequence has N = 2^r elements and the hypercube has P = 2^d processors, then a standard-order FFT can be implemented with d + r/2 + 1 parallel transmissions. An A-order sequence can be transformed with 2d - r/2 parallel transmissions, which is r - d + 1 fewer than the standard order. A parallel method for computing the trigonometric coefficients is presented that does not use trigonometric functions or interprocessor communication. A performance of 0.9 GFLOPS was obtained for an A-order transform on the Connection Machine.

  8. MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun

    Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for a given input matrix A, such that A ≈ WH. NMF is a useful tool for many applications in different domains, such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solve alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices with sizes spanning from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
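
    As a point of reference for the algorithm family discussed above, a plain NumPy multiplicative-update NMF (one of the update rules the framework supports) looks like the sketch below; it is a serial toy, not the MPI-based MPI-FAUN code.

        import numpy as np

        def nmf_mu(A, k, iters=200, eps=1e-9, seed=0):
            rng = np.random.default_rng(seed)
            m, n = A.shape
            W = rng.random((m, k))
            H = rng.random((k, n))
            for _ in range(iters):
                H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
                W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
            return W, H

        A = np.random.default_rng(1).random((50, 40))  # non-negative toy data
        W, H = nmf_mu(A, k=5)
        print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))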

  9. Apparatus and methods for cooling and sealing rotary helical screw compressors

    DOEpatents

    Fresco, A.N.

    1997-08-05

    In a compression system which incorporates a rotary helical screw compressor, and for any type of gas or refrigerant, the working liquid oil is atomized through nozzles suspended in, and parallel to, the suction gas flow, or alternatively the nozzles are mounted on the suction piping. In either case, the aim is to create positively a homogeneous mixture of oil droplets to maximize the effectiveness of the working liquid oil in improving the isothermal and volumetric efficiencies. The oil stream to be atomized may first be degassed at compressor discharge pressure by heating within a pressure vessel and recovering the energy added by using the outgoing oil stream to heat the incoming oil stream. The stripped gas is typically returned to the compressor discharge flow. In the preferred case, the compressor rotors both contain a hollow cavity through which working liquid oil is injected into channels along the edges of the rotors, thereby forming a continuous and positive seal between the rotor edges and the compressor casing. In the alternative method, working liquid oil is injected either in the same direction as the rotor rotation or counter to rotor rotation through channels in the compressor casing which are tangential to the rotor edges and parallel to the rotor center lines or alternatively the channel paths coincide with the helical path of the rotor edges. 14 figs.

  10. Apparatus and methods for cooling and sealing rotary helical screw compressors

    DOEpatents

    Fresco, Anthony N.

    1997-01-01

    In a compression system which incorporates a rotary helical screw compressor, and for any type of gas or refrigerant, the working liquid oil is atomized through nozzles suspended in, and parallel to, the suction gas flow, or alternatively the nozzles are mounted on the suction piping. In either case, the aim is to create positively a homogeneous mixture of oil droplets to maximize the effectiveness of the working liquid oil in improving the isothermal and volumetric efficiencies. The oil stream to be atomized may first be degassed at compressor discharge pressure by heating within a pressure vessel and recovering the energy added by using the outgoing oil stream to heat the incoming oil stream. The stripped gas is typically returned to the compressor discharge flow. In the preferred case, the compressor rotors both contain a hollow cavity through which working liquid oil is injected into channels along the edges of the rotors, thereby forming a continuous and positive seal between the rotor edges and the compressor casing. In the alternative method, working liquid oil is injected either in the same direction as the rotor rotation or counter to rotor rotation through channels in the compressor casing which are tangential to the rotor edges and parallel to the rotor centerlines or alternatively the channel paths coincide with the helical path of the rotor edges.

  11. MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization

    DOE PAGES

    Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun

    2017-10-30

    Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for a given input matrix A, such that A ≈ WH. NMF is a useful tool for many applications in different domains, such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solve alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices with sizes spanning from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.

  12. New Tools and Methods for Assessing Risk-Management Strategies

    DTIC Science & Technology

    2004-03-01

    Theories to evaluate the risks and benefits of various acquisition alternatives and allowed researchers to monitor the process students used to make a... revealed distinct risk-management strategies. Subject terms: risk management, acquisition process, expected value theory, multi-attribute utility theory. ...Utility Theories to evaluate the risks and benefits of various acquisition alternatives, and allowed us to monitor the process subjects used to arrive at

  13. Evaluating alternative fuel treatment strategies to reduce wildfire losses in a Mediterranean area

    Treesearch

    Michele Salis; Maurizio Laconi; Alan A. Ager; Fermin J. Alcasena; Bachisio Arca; Olga Lozano; Ana Fernandes de Oliveira; Donatella Spano

    2016-01-01

    The goal of this work is to evaluate by a modeling approach the effectiveness of alternative fuel treatment strategies to reduce potential losses from wildfires in Mediterranean areas. We compared strategic fuel treatments located near specific human values vs random locations, and treated 3, 9 and 15% of a 68,000 ha study area located in Sardinia, Italy. The...

  14. Plane representations of graphs and visibility between parallel segments

    NASA Astrophysics Data System (ADS)

    Tamassia, R.; Tollis, I. G.

    1985-04-01

    Several layout compaction strategies for VLSI are based on the concept of visibility between parallel segments, where we say that two parallel segments of a given set are visible if they can be joined by a segment orthogonal to them which does not intersect any other segment. This paper studies visibility representations of graphs, which are constructed by mapping vertices to horizontal segments, and edges to vertical segments drawn between visible vertex-segments. Clearly, every graph that admits such a representation must be planar. The authors consider three types of visibility representations, and give complete characterizations of the classes of graphs that admit them. Furthermore, they present linear time algorithms for testing the existence of and constructing visibility representations of planar graphs.

  15. A parallel strategy for predicting the secondary structure of polycistronic microRNAs.

    PubMed

    Han, Dianwei; Tang, Guiliang; Zhang, Jun

    2013-01-01

    The biogenesis of a functional microRNA is largely dependent on the secondary structure of the microRNA precursor (pre-miRNA). Recently, it has been shown that microRNAs are present in the genome as the form of polycistronic transcriptional units in plants and animals. It will be important to design efficient computational methods to predict such structures for microRNA discovery and its applications in gene silencing. In this paper, we propose a parallel algorithm based on the master-slave architecture to predict the secondary structure from an input sequence. We conducted some experiments to verify the effectiveness of our parallel algorithm. The experimental results show that our algorithm is able to produce the optimal secondary structure of polycistronic microRNAs.

  16. Classroom management of situated group learning: A research study of two teaching strategies

    NASA Astrophysics Data System (ADS)

    Smeh, Kathy; Fawns, Rod

    2000-06-01

    Although peer-based work is encouraged by theories in developmental psychology and although classroom interventions suggest it is effective, there are grounds for recognising that young pupils find collaborative learning hard to sustain. Discontinuities in collaborative skill during development have been suggested as one interpretation. Theory and research have neglected situational continuities that the teacher may provide in the management of formal and informal collaborations. This experimental study, conducted with the collaboration of the science faculty in one urban secondary college, investigated the effect of two role attribution strategies on communication in peer groups of different gender composition in three parallel Year 8 science classes. The groups were set a problem that required them to design an experiment to compare the thermal insulating properties of two different materials. This paper presents the data collected and the key findings, and reviews the findings from previous parallel studies that have employed the same research design in different school settings. The results confirm the effectiveness of social role attribution strategies in teacher management of communication in peer-based work.

  17. Parallel microscope-based fluorescence, absorbance and time-of-flight mass spectrometry detection for high performance liquid chromatography and determination of glucosamine in urine.

    PubMed

    Xiong, Bo; Wang, Ling-Ling; Li, Qiong; Nie, Yu-Ting; Cheng, Shuang-Shuang; Zhang, Hui; Sun, Ren-Qiang; Wang, Yu-Jiao; Zhou, Hong-Bin

    2015-11-01

    A parallel microscope-based laser-induced fluorescence (LIF), ultraviolet-visible absorbance (UV) and time-of-flight mass spectrometry (TOF-MS) detection scheme for high performance liquid chromatography (HPLC) was developed and used to determine glucosamine in urine. First, a reliable and convenient LIF detection was developed based on an inverted microscope and corresponding modulations. Parallel HPLC-LIF/UV/TOF-MS detection was then achieved by combining the preceding microscope-based LIF detection with HPLC coupled to UV and TOF-MS. The proposed setup, owing to its parallel scheme, was free of the influence of photobleaching on LIF detection. Rhodamine B, glutamic acid and glucosamine were determined to evaluate its performance. Moreover, the proposed strategy was used to determine glucosamine in urine; the results suggested that glucosamine, which is widely used in the prevention of bone arthritis, was excreted into urine within 4 h, and its concentration in urine decreased to 5.4 mM at 12 h. Efficient glucosamine detection was achieved based on sensitive quantification (LIF), universal detection (UV) and structural characterization (TOF-MS). This application indicated that the proposed strategy is sensitive, universal and versatile, and capable of improved analysis, especially for analytes at low concentrations in complex samples, compared with conventional HPLC-UV/TOF-MS. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    A general technique uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving an inequality constraint.
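
    To illustrate the general setting, the sketch below runs a plain coordinate pattern search with an inequality constraint folded in through a quadratic penalty; this is a generic sketch under those assumptions, not necessarily the report's parallel move strategy.

        def pattern_search(f, g, x0, step=0.5, tol=1e-4, penalty=1e3):
            # Minimize f(x) subject to g(x) <= 0 by penalizing constraint violation.
            def merit(x):
                return f(x) + penalty * max(0.0, g(x)) ** 2
            x = list(x0)
            while step > tol:
                improved = False
                for i in range(len(x)):
                    for d in (+step, -step):
                        trial = list(x)
                        trial[i] += d
                        if merit(trial) < merit(x):
                            x, improved = trial, True
                if not improved:
                    step *= 0.5            # shrink the pattern when no move helps
            return x

        # Example: minimize (x-3)^2 + (y-2)^2 subject to x + y <= 4 (optimum near (2.5, 1.5)).
        sol = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] - 2) ** 2,
                             lambda v: v[0] + v[1] - 4, x0=[0.0, 0.0])
        print([round(c, 3) for c in sol])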

  19. Pulmonary vein stenosis following catheter ablation of atrial fibrillation.

    PubMed

    Pürerfellner, Helmut; Martinek, Martin

    2005-11-01

    This review provides an update on the mechanisms, incidence, and current management of significant pulmonary vein stenosis following catheter ablation of atrial fibrillation. Catheter ablation involving the pulmonary veins and the surrounding left atrial tissue is increasingly used to treat atrial fibrillation. In parallel with the fact that these procedures may cure a substantial proportion of patients, severe complications have been observed. Pulmonary vein stenosis is a new clinical entity produced by radiofrequency energy delivery mainly within or at the orifice of the pulmonary veins. The exact incidence is currently unknown because the diagnosis is dependent on the imaging modality and on the rigor with which patients are followed up. The optimal method for screening patients has not been determined. Stenosis of a pulmonary vein may be assessed by combining anatomic and functional imaging using computed tomographic or magnetic resonance imaging, transesophageal echocardiography, and lung scanning. Symptoms vary considerably and may be misdiagnosed, leading to severe clinical consequences. Current treatment strategies involve pulmonary vein dilatation or stenting; however, the restenosis rate remains high. The long-term outcome in patients with pulmonary vein stenosis is unclear. Strategies under development to prevent pulmonary vein stenosis include alternate energy sources and modified ablation techniques. Pulmonary vein stenosis following catheter ablation is a new clinical entity that has been described in various reports recently. There is much uncertainty with respect to causative factors, incidence, diagnosis, and treatment, and long-term sequelae are unclear.

  20. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness of the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes that span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
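
    For readers unfamiliar with the technique named above, the adjoint-weighted residual estimate of output error is commonly written in the generic form below; the abstract does not spell out the exact discrete operators used by the Cartesian solver, so take this only as the standard textbook form of output-based error estimation.

        \delta J \;\approx\; -\,\psi_h^{\top} R_h(u_H)

    where u_H is the solution on the current mesh, R_h is the residual operator of a finer (or embedded) discretization evaluated on that solution, and \psi_h is the discrete adjoint associated with the output J.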
