Mixed methods in gerontological research: Do the qualitative and quantitative data “touch”?
Happ, Mary Beth
2010-01-01
This paper distinguishes between parallel and integrated mixed methods research approaches. Barriers to integrated mixed methods approaches in gerontological research are discussed and critiqued. The author presents examples of mixed methods gerontological research to illustrate approaches to data integration at the levels of data analysis, interpretation, and research reporting. As a summary of the methodological literature, four basic levels of mixed methods data combination are proposed. Opportunities for mixing qualitative and quantitative data are explored using contemporary examples from published studies. Data transformation and visual display, judiciously applied, are proposed as pathways to fuller mixed methods data integration and analysis. Finally, practical strategies for mixing qualitative and quantitative data types are explicated as gerontological research moves beyond parallel mixed methods approaches to achieve data integration. PMID:20077973
ERIC Educational Resources Information Center
Kerrigan, Monica Reid
2014-01-01
This convergent parallel design mixed methods case study of four community colleges explores the relationship between organizational capacity and implementation of data-driven decision making (DDDM). The article also illustrates purposive sampling using replication logic for cross-case analysis and the strengths and weaknesses of quantitizing…
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
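As a toy illustration of the branch-and-bound (B&B) core that PIPS-SBB parallelizes, the sketch below solves a small binary program by LP-relaxation branch and bound with SciPy. It is a minimal serial sketch, unrelated to the PIPS-SBB codebase, and the instance data are made up.

```python
# LP-relaxation branch and bound for a tiny binary MIP:
#   maximize c @ x  subject to  A_ub @ x <= b_ub,  x in {0,1}^n.
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
    n = len(c)
    best_val, best_x = float("-inf"), None
    stack = [tuple((0.0, 1.0) for _ in range(n))]     # root node: all vars relaxed
    while stack:
        bounds = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success or -res.fun <= best_val + tol:
            continue                                   # infeasible or fathomed by bound
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > tol), None)
        if frac is None:                               # integral solution: new incumbent
            best_val, best_x = -res.fun, [int(round(v)) for v in res.x]
            continue
        for fix in (0.0, 1.0):                         # branch on a fractional variable
            child = list(bounds)
            child[frac] = (fix, fix)
            stack.append(tuple(child))
    return best_val, best_x

# tiny knapsack-like instance; optimum is 9 at x = (1, 1, 0)
print(branch_and_bound(c=[5, 4, 3], A_ub=[[2, 3, 1], [4, 1, 2]], b_ub=[5, 11]))
```

In a distributed solver like the one described, the node stack becomes a work pool shared across ranks; the serial loop above is only the logical skeleton.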
Tan, Christabel Kl; Davies, Matthew J; McCluskey, Daniel K; Munro, Ian R; Nweke, Mauryn C; Tracey, Mark C; Szita, Nicolas
2015-10-01
Microbioreactors have emerged as novel tools for early bioprocess development. Mixing lies at the heart of bioreactor operation (at all scales). The successful implementation of micro-stirring methods is thus central to the further advancement of microbioreactor technology. The aim of this study was to develop a micro-stirring method that aids robust microbioreactor operation and facilitates cost-effective parallelization. A microbioreactor was developed with a novel micro-stirring method involving the movement of a magnetic bead by sequenced activation of a ring of electromagnets. The micro-stirring method offers flexibility in chamber design, and mixing is demonstrated in cylindrical, diamond- and triangular-shaped reactor chambers. Mixing was analyzed for different electromagnet on/off sequences; mixing times of 4.5 s, 2.9 s, and 2.5 s were achieved for the cylindrical, diamond- and triangular-shaped chambers, respectively. Ease of micro-bubble-free priming, a typical challenge of cylindrically shaped microbioreactor chambers, was achieved with the diamond-shaped chamber. Consistent mixing behavior was observed between the constituent reactors of a duplex system. A novel stirring method using electromagnetic actuation, offering rapid mixing and easy integration with microbioreactors, was characterized. The design flexibility gained enables fabrication of chambers suitable for microfluidic operation, and a duplex demonstrator highlights the potential for cost-effective parallelization. Combined with a previously published cassette-like fabrication of microbioreactors, these advances will facilitate the development of robust and parallelized microbioreactors. © 2015 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Invisible nursing research: thoughts about mixed methods research and nursing practice.
Fawcett, Jacqueline
2015-04-01
In this essay, the author addresses the close connection between mixed methods research and nursing practice. If the assertion that research and practice are parallel processes is accepted, then nursing practice may be considered "invisible mixed methods research," in that almost every encounter between a nurse and a patient involves collection and integration of qualitative (word) and quantitative (number) information that actually is single-case mixed methods research. © The Author(s) 2015.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be a subject of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems which are posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provide results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
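For orientation, the standard mixed (velocity-pressure) statement of Darcy flow that the abstract refers to is sketched below; this is the textbook formulation (cf. the Brezzi-Fortin reference above), with homogeneous boundary conditions assumed for brevity, not an excerpt from the paper.

```latex
% Darcy flow in mixed form: u = -K \nabla p, \ \nabla \cdot u = f in \Omega, p = 0 on \partial\Omega.
% Mixed weak formulation: find (u, p) \in H(\mathrm{div};\Omega) \times L^2(\Omega) such that
\begin{aligned}
\int_\Omega K^{-1} u \cdot v \, dx \;-\; \int_\Omega p \, (\nabla \cdot v) \, dx &= 0
  && \forall\, v \in H(\mathrm{div};\Omega), \\
\int_\Omega q \, (\nabla \cdot u) \, dx &= \int_\Omega f \, q \, dx
  && \forall\, q \in L^2(\Omega).
\end{aligned}
```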
Mixed methods research in mental health nursing.
Kettles, A M; Creswell, J W; Zhang, W
2011-08-01
Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed, with some possible solutions proposed. © 2011 Blackwell Publishing.
Investigating Learning with an Interactive Tutorial: A Mixed-Methods Strategy
ERIC Educational Resources Information Center
de Villiers, M. R.; Becker, Daphne
2017-01-01
From the perspective of parallel mixed-methods research, this paper describes interactivity research that employed usability-testing technology to analyse cognitive learning processes; personal learning styles and times; and errors-and-recovery of learners using an interactive e-learning tutorial called "Relations." "Relations"…
Being Outside Learning about Science Is Amazing: A Mixed Methods Study
ERIC Educational Resources Information Center
Weibel, Michelle L.
2011-01-01
This study used a convergent parallel mixed methods design to examine teachers' environmental attitudes and concerns about an outdoor educational field trip. Converging both quantitative data (Environmental Attitudes Scale and teacher demographics) and qualitative data (Open-Ended Statements of Concern and interviews) facilitated interpretation.…
Mixing enhancement of reacting parallel fuel jets in a supersonic combustor
NASA Technical Reports Server (NTRS)
Drummond, J. P.
1991-01-01
Pursuant to a NASA-Langley development program for a scramjet HST propulsion system entailing the optimization of the scramjet combustor's fuel-air mixing and reaction characteristics, a numerical study has been conducted of the candidate parallel fuel injectors. Attention is given to a method for flow mixing-process and combustion-efficiency enhancement in which a supersonic circular hydrogen jet coflows with a supersonic air stream. When enhanced by a planar oblique shock, the injector configuration exhibited a substantial degree of induced vorticity in the fuel stream which increased mixing and chemical reaction rates, relative to the unshocked configuration. The resulting heat release was effective in breaking down the stable hydrogen vortex pair that had inhibited more extensive fuel-air mixing.
Veteran Teacher Engagement in Site-Based Professional Development: A Mixed Methods Study
ERIC Educational Resources Information Center
Houston, Biaze L.
2016-01-01
This research study examined how teachers self-report their levels of engagement, which factors they believe contribute most to their engagement, and which assumptions of andragogy most heavily influence teacher engagement in site-based professional development. This study employed a convergent parallel mixed methods design to study veteran…
Technology-Enhanced Multimedia Instruction in Foreign Language Classrooms: A Mixed Methods Study
ERIC Educational Resources Information Center
Ketsman, Olha
2012-01-01
Technology-enhanced multimedia instruction in grades 6 through 12 foreign language classrooms was the focus of this study. The study's findings fill a gap in the literature through the report of how technology-enhanced multimedia instruction was successfully implemented in foreign language classrooms. Convergent parallel mixed methods study…
Ultra low injection angle fuel holes in a combustor fuel nozzle
York, William David
2012-10-23
A fuel nozzle for a combustor includes a mixing passage through which fluid is directed toward a combustion area and a plurality of swirler vanes disposed in the mixing passage. Each swirler vane of the plurality of swirler vanes includes at least one fuel hole through which fuel enters the mixing passage in an injection direction substantially parallel to an outer surface of the plurality of swirler vanes thereby decreasing a flameholding tendency of the fuel nozzle. A method of operating a fuel nozzle for a combustor includes flowing a fluid through a mixing passage past a plurality of swirler vanes and injecting a fuel into the mixing passage in an injection direction substantially parallel to an outer surface of the plurality of swirler vanes.
A Neighborhood Notion of Emergent Literacy: One Mixed Methods Inquiry to Inform Community Learning
ERIC Educational Resources Information Center
Hoffman, Emily Brown; Whittingham, Colleen E.
2017-01-01
Using a convergent parallel mixed methods design, this study considered the early literacy and language environments actualized by childcare providers and parents of young children (ages 3-5) living in one large urban community in the United States of America. Both childcare providers and parents responded to questionnaires and participated in…
ERIC Educational Resources Information Center
Garcia, Gina A.; Huerta, Adrian H.; Ramirez, Jenesis J.; Patrón, Oscar E.
2017-01-01
As the number of Latino males entering college increases, there is a need to understand their unique leadership experiences. This study used a convergent parallel mixed methods design to understand what contexts contribute to Latino male undergraduate students' leadership development, capacity, and experiences. Quantitative data were gathered by…
ERIC Educational Resources Information Center
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan
2012-01-01
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves partitioning thermal models into several substructured levels wherein an optimal balance among the various associated bandwidths is achieved. The details are described in this report, which is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
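A minimal serial sketch of the substructured explicit update that such partitioning builds on follows; the subdomain loop stands in for the interprocessor communication of a real parallel run, and every constant is illustrative.

```python
import numpy as np

# Explicit finite-difference heat conduction on a 1D bar, with the grid
# partitioned into "substructures" that read halo values from neighbours.
alpha, dx, dt, steps = 1.0, 0.01, 4e-5, 500
r = alpha * dt / dx**2                         # explicit stability needs r <= 0.5
u = np.zeros(100)
u[45:55] = 1.0                                 # initial hot spot
parts = np.array_split(np.arange(u.size), 4)   # four "subdomains"

for _ in range(steps):
    new = u.copy()
    for idx in parts:                          # each substructure updates its interior
        for i in idx:
            if 0 < i < u.size - 1:             # neighbour values act as halo data
                new[i] = u[i] + r * (u[i-1] - 2.0*u[i] + u[i+1])
    u = new
print(u.max())                                 # peak decays as heat diffuses
```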
CFD simulation of local and global mixing time in an agitated tank
NASA Astrophysics Data System (ADS)
Li, Liangchao; Xu, Bin
2017-01-01
The issue of mixing efficiency in agitated tanks has drawn serious concern in many industrial processes. The turbulence model is critical to predicting the mixing process in agitated tanks. On the basis of the computational fluid dynamics (CFD) software package Fluent 6.2, the mixing characteristics in a tank agitated by dual six-blade Rushton turbines (6-DT) are predicted using the detached eddy simulation (DES) method. A sliding mesh (SM) approach is adopted to solve the rotation of the impeller. The simulated flow patterns and liquid velocities in the agitated tank are verified by experimental data in the literature. The simulation results indicate that the DES method can obtain more flow details than a Reynolds-averaged Navier-Stokes (RANS) model. Local and global mixing time in the agitated tank is predicted by solving a tracer concentration scalar transport equation. The simulated results show that feeding points have a great influence on the mixing process and mixing time. Mixing efficiency is highest when the feeding point is located midway between the two impellers. Two methods are used to determine the global mixing time and give close results. The dimensionless global mixing time remains unchanged with increasing impeller speed. Parallel, merging and diverging flow patterns form in the agitated tank, respectively, by changing the impeller spacing and the clearance of the lower impeller from the bottom of the tank. The global mixing time is shortest for the merging flow, followed by the diverging flow, and longest for the parallel flow. The research presents helpful references for the design, optimization and scale-up of agitated tanks with multiple impellers.
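The abstract does not spell out its two mixing-time criteria; a common convention, used here purely as an assumption, defines the local mixing time as the moment after which a probe's tracer concentration stays within ±5% of its final, fully mixed value.

```python
import numpy as np

def mixing_time_t95(t, c, band=0.05):
    """Time after which c stays within +/-band of its final value."""
    c_final = c[-1]
    outside = np.abs(c - c_final) > band * abs(c_final)
    if not outside.any():
        return t[0]
    i = np.nonzero(outside)[0].max()
    return t[i + 1] if i + 1 < len(t) else None   # None: never settles in the record

t = np.linspace(0.0, 60.0, 601)                   # s
c = 1.0 - np.exp(-t / 8.0)                        # synthetic probe response
print(mixing_time_t95(t, c))                      # ~24 s for this signal
```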
Instructional Coaching in a Small District: A Mixed Methods Study of Teachers' Concerns
ERIC Educational Resources Information Center
Mayfield, Melissa J.
2016-01-01
This study utilized a convergent parallel mixed methods design to study teachers' concerns during implementation of instructional coaching for math in a rural PK-12 district in north Texas over a three-year time period. Five campuses were included in the study: one high school (grades 9-12), one middle school (grades 6-8), and three elementary…
ERIC Educational Resources Information Center
Costa, Ann Marie
2012-01-01
A recent law in a New England state allowed public schools to operate with increased flexibility and autonomy through the authorization of the creation of Innovation Schools. This project study, a program evaluation using a convergent parallel mixed methods research design, allowed for a comprehensive evaluation of the first Innovation School…
Guidance for using mixed methods design in nursing practice research.
Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia
2016-08-01
The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include: research questions; data collection procedures; and analysis with a focus on synthesizing findings. Based on their experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Senra, Hugo
2013-01-01
The current pilot study aims to explore whether different adults' experiences of lower-limb amputation could be associated with different levels of depression. To achieve these study objectives, a convergent parallel mixed methods design was used in a convenience sample of 42 adult amputees (mean age of 61 years; SD = 13.5). All of them had…
ERIC Educational Resources Information Center
Isyar, Özge Özgür; Akay, Cenk
2017-01-01
The purpose of this research is to determine classroom teachers' sense of efficacy regarding drama in education, to examine it in terms of various variables, and to reveal their opinions and metaphorical perceptions regarding the concept of drama in education. Convergent parallel design, which is one of the mixed method designs, was used in the…
Newcomer Immigrant Adolescents: A Mixed-Methods Examination of Family Stressors and School Outcomes
ERIC Educational Resources Information Center
Patel, Sita G.; Clarke, Annette V.; Eltareb, Fazia; Macciomei, Erynn E.; Wickham, Robert E.
2016-01-01
Family stressors predict negative psychological outcomes for immigrant adolescents, yet little is known about how such stressors interact to predict school outcomes. The purpose of this study was to explore the interactive role of family stressors on school outcomes for newcomer adolescent immigrants. Using a convergent parallel mixed-methods…
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
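SciPy's `newton_krylov` makes it easy to experiment with the inexact-Newton/Krylov core of NKS (without the Schwarz preconditioner or the upwinding continuation the paper adds). The 1D Bratu problem below is a small nonlinear stand-in for the full potential equation.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Solve u'' + exp(u) = 0 on (0,1), u(0) = u(1) = 0, by Newton's method with
# LGMRES inner iterations and finite-difference Jacobian-vector products.
n = 101
h = 1.0 / (n - 1)

def residual(u):
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]                                  # Dirichlet ends
    r[1:-1] = (u[:-2] - 2.0*u[1:-1] + u[2:]) / h**2 + np.exp(u[1:-1])
    return r

u = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-10)
print(u.max())   # small positive bump, symmetric about x = 0.5
```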
Parallelized implicit propagators for the finite-difference Schrödinger equation
NASA Astrophysics Data System (ADS)
Parker, Jonathan; Taylor, K. T.
1995-08-01
We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of workstation clusters and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
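A minimal dense sketch of the mixed direct-iterative idea: the diagonal blocks are LU-factorized once (the direct part) and reused inside block-Jacobi sweeps (the iterative part). The matrix here is a random, diagonally dominated stand-in, not the paper's Schrödinger discretization.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
nb, m = 4, 50                                    # four blocks of size 50
n = nb * m
A = 4.0 * np.eye(n) + rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))
b = rng.normal(size=n)

blocks = [slice(i * m, (i + 1) * m) for i in range(nb)]
lus = [lu_factor(A[s, s]) for s in blocks]       # direct part, done once

x = np.zeros(n)
for sweep in range(50):                          # iterative part
    x_new = x.copy()
    for s, lu in zip(blocks, lus):
        r = b[s] - A[s, :] @ x + A[s, s] @ x[s]  # strip off-block coupling
        x_new[s] = lu_solve(lu, r)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new
print(np.linalg.norm(A @ x - b))                 # small once converged
```

Each block update within a sweep is independent, which is exactly where the block-wise methods of the paper expose their parallelism.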
ERIC Educational Resources Information Center
Bullock, Emma P.; Shumway, Jessica F.; Watts, Christina M.; Moyer-Packenham, Patricia S.
2017-01-01
The purpose of this study was to contribute to the research on mathematics app use by very young children, and specifically mathematics apps for touch-screen mobile devices that contain virtual manipulatives. The study used a convergent parallel mixed methods design, in which quantitative and qualitative data were collected in parallel, analyzed…
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
Penas, David R.; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.
2017-01-01
Background: We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods: We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results: We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions: These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442
ERIC Educational Resources Information Center
Faraone, Stephen V.; Wigal, Sharon B.; Hodgkins, Paul
2007-01-01
Objective: Compare observed and forecasted efficacy of mixed amphetamine salts extended release (MAS-XR; Adderall) with atomoxetine (Strattera) in ADHD children. Method: The authors analyze data from a randomized, double-blind, multicenter, parallel-group, forced-dose-escalation laboratory school study of children ages 6 to 12 with ADHD combined…
A Systematic Review of Mixed Methods Research on Human Factors and Ergonomics in Health Care
Carayon, Pascale; Kianfar, Sarah; Li, Yaqiong; Xie, Anping; Alyousef, Bashar; Wooldridge, Abigail
2016-01-01
This systematic literature review provides information on the use of mixed methods research in human factors and ergonomics (HFE) research in health care. Using the PRISMA methodology, we searched four databases (PubMed, PsycInfo, Web of Science, and Engineering Village) for studies that met the following inclusion criteria: (1) field study in health care, (2) mixing of qualitative and quantitative data, (3) HFE issues, and (4) empirical evidence. Using an iterative and collaborative process supported by a structured data collection form, the six authors identified a total of 58 studies that primarily address HFE issues in health information technology (e.g., usability) and in the work of healthcare workers. About two-thirds of the mixed methods studies used the convergent parallel study design where quantitative and qualitative data were collected simultaneously. A variety of methods were used for collecting data, including interview, survey and observation. The most frequent combination involved interview for qualitative data and survey for quantitative data. The use of mixed methods in healthcare HFE research has increased over time. However, increasing attention should be paid to the formal literature on mixed methods research to enhance the depth and breadth of this research. PMID:26154228
School Principals' Opinions on In-Class Inspections
ERIC Educational Resources Information Center
Kayikci, Kemal; Sahin, Ahmet; Canturk, Gokhan
2016-01-01
The aim of this research is to determine school principals' opinions on the in-class inspections carried out by inspectors of the Ministry of National Education of Turkey (MEB). The study was modeled as a convergent parallel design, one of the mixed methods designs, combining qualitative and quantitative methods. For data collection, the researchers…
Parallel realities: exploring poverty dynamics using mixed methods in rural Bangladesh.
Davis, Peter; Baulch, Bob
2011-01-01
This paper explores the implications of using two methodological approaches to study poverty dynamics in rural Bangladesh. Using data from a unique longitudinal study, we show how different methods lead to very different assessments of socio-economic mobility. We suggest five ways of reconciling these differences: considering assets in addition to expenditures, proximity to the poverty line, other aspects of well-being, household division, and qualitative recall errors. Considering assets and proximity to the poverty line along with expenditures resolves three-fifths of the qualitative and quantitative differences. Use of such integrated mixed-methods can therefore improve the reliability of poverty dynamics research.
ERIC Educational Resources Information Center
Fidan, Nuray Kurtdede; Ergün, Mustafa
2016-01-01
In this study, social, literary and technological sources used by classroom teachers in social studies courses are analyzed in terms of frequency. The study employs mixed methods research and is designed following the convergent parallel design. In the qualitative part of the study, phenomenological method was used and in the quantitative…
Center for Parallel Optimization
1993-09-30
[Garbled report documentation page; recoverable details: approved for public release; Thinking Machines Corporation, March 16-19, 1993, "A Branch-and-Bound Method for Mixed Integer Programming on the CM-5"; Dr. Roberto Musmanno, University of…]
Applications of mixed-methods methodology in clinical pharmacy research.
Hadi, Muhammad Abdul; Closs, S José
2016-06-01
Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are: the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, so it is advisable to adhere to this to ensure methodological rigour. When to use: Mixed methods are best suited when the research questions require: triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; development of a scale/questionnaire; and answering different research questions within a single study. Two case studies are presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
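For reference, the damped Newton update the abstract relies on has the generic form below, with the damping parameter chosen (optimally, in the paper's scheme) to aid convergence:

```latex
% Damped Newton iteration for F(x) = 0, with damping \lambda_k \in (0, 1]:
x_{k+1} = x_k - \lambda_k \, J(x_k)^{-1} F(x_k),
\qquad J(x_k) = \left.\frac{\partial F}{\partial x}\right|_{x = x_k}.
```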
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
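A small serial sketch of preconditioned conjugate gradients with a one-level, block-wise (Schwarz-style) preconditioner follows; the 1D Laplacian, the non-overlapping blocks and all sizes are illustrative, and the independent local solves are precisely the part a parallel run would distribute.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, splu, LinearOperator

n, nb = 400, 4
A = sp.diags([-np.ones(n-1), 2.0*np.ones(n), -np.ones(n-1)],
             [-1, 0, 1], format="csc")           # SPD 1D Laplacian
b = np.ones(n)

blocks = [slice(i * n // nb, (i + 1) * n // nb) for i in range(nb)]
local = [splu(A[s, s].tocsc()) for s in blocks]  # local direct factorizations

def precond(r):                                  # additive: sum local corrections
    z = np.zeros_like(r)
    for s, lu in zip(blocks, local):
        z[s] += lu.solve(r[s])                   # independent -> parallelizable
    return z

x, info = cg(A, b, M=LinearOperator((n, n), matvec=precond))
print(info, np.linalg.norm(A @ x - b))           # info == 0 on convergence
```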
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed methods (MM) research designs combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to disseminate this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value to improve clinical practice as, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
A novel parallel pipeline structure of VP9 decoder
NASA Astrophysics Data System (ADS)
Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan
2018-04-01
To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, which include entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, hotspot modules are located and the causes of the decoder's low efficiency can be found. A novel pipeline decoder structure is then designed using mixed parallel decoding methods of data division and function division. The experimental results show that this structure can greatly improve the decoding efficiency of VP9.
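A toy function-division pipeline of the kind described can be sketched with threads and queues; the stage names follow the abstract, the per-stage work is simulated, and nothing here reflects the actual VP9 code.

```python
import threading
import queue

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:               # poison pill: propagate shutdown
            if q_out is not None:
                q_out.put(None)
            return
        out = fn(item)
        if q_out is not None:
            q_out.put(out)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda f: f + ["entropy"], q1, q2)),
    threading.Thread(target=stage, args=(lambda f: f + ["iq+itx+pred"], q2, q3)),
    threading.Thread(target=stage, args=(lambda f: print("deblocked", f), q3, None)),
]
for t in threads:
    t.start()
for i in range(3):                     # frames flow through the stages
    q1.put([f"frame{i}"])
q1.put(None)
for t in threads:
    t.join()
```

Data division would additionally split each frame (e.g. by tile) across workers inside a stage.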
Faculty and Community Collaboration in Sustained Community-University Engagement Partnerships
ERIC Educational Resources Information Center
Allen, Angela Danyell
2009-01-01
This dissertation is a qualitative case study of the factors of collaboration between faculty and community partners in sustained community-university engagement partnerships at a public research university in the Midwest. Based on secondary data from an annual, online, mixed-method survey of faculty-reported engagement activity, parallel yet…
A Technological Acceptance of Remote Laboratory in Chemistry Education
ERIC Educational Resources Information Center
Ling, Wendy Sing Yii; Lee, Tien Tien; Tho, Siew Wei
2017-01-01
The purpose of this study is to evaluate the technological acceptance of Chemistry students, and the opinions of Chemistry lecturers and laboratory assistants towards the use of remote laboratory in Chemistry education. The convergent parallel design mixed method was carried out in this study. The instruments involved were questionnaire and…
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to handle in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated, for the matching-grid case, on several hundred nodes for computations based on a pin-by-pin discretization.
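A compact sketch of the inverse power algorithm for the dominant eigenvalue of a generalized problem B x = λ A x (the shape of a reactivity computation) is given below; A is factorized once and reused every iteration, and the small random matrices are stand-ins for the transport operators.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power(A, B, tol=1e-10, maxit=500):
    """Dominant eigenpair of B x = lam * A x via iteration x <- A^{-1} B x."""
    lu = lu_factor(A)                       # factorize once
    x, lam = np.ones(A.shape[0]), 0.0
    for _ in range(maxit):
        y = lu_solve(lu, B @ x)             # one "power" step
        lam_new = np.linalg.norm(y)         # growth factor -> |lambda|
        y /= lam_new
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, y
        lam, x = lam_new, y
    return lam, x

rng = np.random.default_rng(1)
A = 2.0 * np.eye(4) + 0.1 * rng.random((4, 4))
B = np.eye(4) + 0.1 * rng.random((4, 4))
lam, x = inverse_power(A, B)
print(lam, np.linalg.norm(B @ x - lam * (A @ x)))   # residual ~ 0
```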
Carbothermic reduction with parallel heat sources
Troup, Robert L.; Stevenson, David T.
1984-12-04
Disclosed are apparatus and method of carbothermic direct reduction for producing an aluminum alloy from a raw material mix including aluminum oxide, silicon oxide, and carbon wherein parallel heat sources are provided by a combustion heat source and by an electrical heat source at essentially the same position in the reactor, e.g., such as at the same horizontal level in the path of a gravity-fed moving bed in a vertical reactor. The present invention includes providing at least 79% of the heat energy required in the process by the electrical heat source.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
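For orientation, the two-operator case of the problem and one representative simultaneous (parallel-type) step of the kind studied in this literature are shown below; this is a generic statement, not necessarily the authors' exact scheme.

```latex
% Given firmly quasi-nonexpansive operators U, T with fixed-point sets
% C = Fix(U), Q = Fix(T) and bounded linear maps A, B, find
x^{*} \in C, \quad y^{*} \in Q \quad\text{such that}\quad A x^{*} = B y^{*}.
% A representative simultaneous iteration with step sizes \gamma_k > 0:
\begin{aligned}
x_{k+1} &= U\big(x_k - \gamma_k A^{*}(A x_k - B y_k)\big),\\
y_{k+1} &= T\big(y_k + \gamma_k B^{*}(A x_k - B y_k)\big).
\end{aligned}
```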
NASA Astrophysics Data System (ADS)
Negrello, Camille; Gosselet, Pierre; Rey, Christian
2018-05-01
An efficient method for solving large nonlinear problems combines Newton solvers and Domain Decomposition Methods (DDM). In the DDM framework, the boundary conditions can be chosen to be primal, dual or mixed. The mixed approach has the advantage of being eligible for the search for an optimal interface parameter (often called impedance), which can increase the convergence rate. The optimal value of this parameter is often too expensive to compute exactly in practice: an approximate version has to be sought, along with a compromise between efficiency and computational cost. In the context of parallel algorithms for solving nonlinear structural mechanics problems, we propose a new heuristic for the impedance which combines short- and long-range effects at a low computational cost.
Systems and methods for thermal imaging technique for measuring mixing of fluids
Booten, Charles; Tomerlin, Jeff; Winkler, Jon
2016-06-14
Systems and methods for thermal imaging for measuring mixing of fluids are provided. In one embodiment, a method for measuring mixing of gaseous fluids using thermal imaging comprises: positioning a thermal test medium parallel to a direction gaseous fluid flow from an outlet vent of a momentum source, wherein when the source is operating, the fluid flows across a surface of the medium; obtaining an ambient temperature value from a baseline thermal image of the surface; obtaining at least one operational thermal image of the surface when the fluid is flowing from the outlet vent across the surface, wherein the fluid has a temperature different than the ambient temperature; and calculating at least one temperature-difference fraction associated with at least a first position on the surface based on a difference between temperature measurements obtained from the at least one operational thermal image and the ambient temperature value.
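A small sketch of the temperature-difference fraction computation is given below; normalizing by the outlet-ambient difference is an assumption made for illustration, since the text above only specifies a difference from the ambient value.

```python
import numpy as np

def temperature_difference_fraction(frame, t_ambient, t_outlet):
    """0 = still at ambient, 1 = fully at the source outlet temperature."""
    return (frame - t_ambient) / (t_outlet - t_ambient)

t_ambient = 21.0                                  # deg C, from the baseline image
frame = np.array([[21.5, 24.0],                   # operational image crop
                  [27.0, 30.0]])
print(temperature_difference_fraction(frame, t_ambient, t_outlet=35.0))
```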
Balanced Reading Basals and the Impact on Third-Grade Reading Achievement
ERIC Educational Resources Information Center
Dorsey, Windy
2015-01-01
This convergent parallel mixed methods study sought to determine whether the reading program increased third-grade student achievement. The research questions of the study examined the reading achievement scores of third-grade students and the effectiveness of McGraw-Hill Reading Wonders™. Significant differences were observed when a paired sample t test…
ERIC Educational Resources Information Center
Fettahlioglu, Pinar
2018-01-01
The purpose of this study is to investigate the effect of argumentation implementation in the environmental science course on science teacher candidates' environmental education self-efficacy beliefs and their perspectives on environmental problems. In this mixed method research study, convergent parallel design was utilized.…
Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model
ERIC Educational Resources Information Center
Duman, Serap Nur; Akbas, Oktay
2017-01-01
This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…
Humor Climate of the Primary Schools
ERIC Educational Resources Information Center
Sahin, Ahmet
2018-01-01
The aim of this study is to determine the opinions of primary school administrators and teachers on humor climates in primary schools. The study was modeled as a convergent parallel design, one of the mixed methods designs. The data gathered from 253 administrator questionnaires, and 651 teacher questionnaires was evaluated for the quantitative part of the…
Learning Processes in Blended Language Learning: A Mixed-Methods Approach
ERIC Educational Resources Information Center
Shahrokni, Seyed Abdollah; Talaeizadeh, Ali
2013-01-01
This article attempts to investigate the learning processes in blended language learning through assessing sources of information: logs, chat and forum scripts, and semi-structured interviews. Creating a MOODLE-based parallel component to face-to-face instruction for a group of EFL learners, we probed into 2,984 logged actions providing raw…
ERIC Educational Resources Information Center
Demir, Selcuk Besir; Pismek, Nuray
2018-01-01
In today's educational landscape, social studies classes are characterized by controversial issues (CIs) that teachers handle differently using various ideologies. These CIs have become more and more popular, particularly in heterogeneous communities. The actual classroom practices for teaching social studies courses are unclear in the context of…
ERIC Educational Resources Information Center
Sezer, Adem; Inel, Yusuf; Seçkin, Ahmet Çagdas; Uluçinar, Ufuk
2017-01-01
This study aimed to detect any relationship that may exist between classroom teacher candidates' class participation and their attention levels. The research method was a convergent parallel design, mixing quantitative and qualitative research techniques, and the study group was composed of 21 freshmen studying in the Classroom Teaching Department…
ERIC Educational Resources Information Center
Sundararajan, NarayanKripa; Adesope, Olusola; Cavagnetto, Andy
2017-01-01
To develop and nurture critical thinking, students must have opportunities to observe and practice critical thinking in the classroom. In this parallel mixed method classroom study, we investigate the role of collaborative concept mapping in the development of kindergarten learners' critical thinking skills of analysis and interpretation over a…
ERIC Educational Resources Information Center
Papa, Dorothy P.
2017-01-01
This exploratory mixed method convergent parallel study examined Connecticut Educational leadership preparation programs for the existence of mental health content to learn the extent to which pre-service school leaders are prepared for addressing student mental health. Interviews were conducted with school mental health experts and Connecticut…
ERIC Educational Resources Information Center
Wang, Isobel Kai-Hui
2018-01-01
The global population of students pursuing studies abroad continues to grow, and consequently their intercultural experiences are receiving greater research attention. However, research into long-term student sojourners' academic development and personal growth is still in its infancy. A parallel mixed method study was designed to investigate the…
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a Computational-Fluid-Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, while we have not been able to compile the code with the present version of HPF compilers.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled time-asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
Parallel and Scalable Clustering and Classification for Big Data in Geosciences
NASA Astrophysics Data System (ADS)
Riedel, M.
2015-12-01
Machine learning, data mining, and statistical computing are common techniques for performing analysis in the earth sciences. This contribution will focus on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open source parallel and scalable DBSCAN implementation will be discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus set on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation will be discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
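A small serial illustration of the two methods named, DBSCAN and an SVM classifier, is shown below using scikit-learn on synthetic data; the contribution itself concerns parallel, scalable implementations, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

rng = np.random.default_rng(0)
blob = rng.normal(0.0, 0.3, size=(200, 2))
X = np.vstack([blob, blob + [2.5, 0.0],            # two dense clusters
               rng.uniform(-3, 3, size=(20, 2))])  # sparse background noise

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
print("clusters:", sorted(set(labels) - {-1}), "noise:", int((labels == -1).sum()))

mask = labels != -1                                # train an SVM on the clusters
clf = SVC(kernel="rbf").fit(X[mask], labels[mask])
print(clf.predict([[0.0, 0.0], [2.5, 0.0]]))       # one label per cluster centre
```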
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Gao, Jia-Hong
Purpose: To investigate whether the direction of the spin-lock field, either parallel or antiparallel to the rotating magnetization, has any effect on the spin-lock MRI signal, and further on the quantitative measurement of T1ρ, in a clinical 3 T MRI system. Methods: The effects of inverted spin-lock field direction were investigated by acquiring a series of spin-lock MRI signals for an American College of Radiology MRI phantom, while the spin-lock field direction was switched between the parallel and antiparallel directions. The acquisition was performed for different spin-locking methods (i.e., for the single- and dual-field spin-locking methods) and for different levels of clinically feasible spin-lock field strength, ranging from 100 to 500 Hz, while the spin-lock duration was varied in the range from 0 to 100 ms. Results: When the spin-lock field was inverted into the antiparallel direction, the rate of MRI signal decay was altered and the T1ρ value, when compared to the value for the parallel field, was clearly different. Different degrees of such direction-dependency were observed for different spin-lock field strengths. In addition, the dependency was much smaller when the parallel and the antiparallel fields were mixed together in the dual-field method. Conclusions: The spin-lock field direction could impact the MRI signal and further the T1ρ measurement in a clinical MRI system.
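The T1ρ values compared above come from fitting a monoexponential decay across spin-lock durations. A minimal sketch of that fitting step, with synthetic signal values and an assumed 60 ms ground truth rather than the phantom data:

```python
# Hedged sketch: fit T1rho from S(TSL) = S0 * exp(-TSL / T1rho).
import numpy as np
from scipy.optimize import curve_fit

def decay(tsl_ms, s0, t1rho_ms):
    return s0 * np.exp(-tsl_ms / t1rho_ms)

tsl = np.array([0, 10, 20, 40, 60, 80, 100], dtype=float)   # spin-lock durations (ms)
signal = decay(tsl, 1000.0, 60.0) + np.random.default_rng(1).normal(0, 5, tsl.size)

popt, _ = curve_fit(decay, tsl, signal, p0=[signal[0], 50.0])
print(f"fitted T1rho = {popt[1]:.1f} ms")   # repeat per field direction and compare
```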
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested on small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested on sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
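As a hedged sketch of the problem setting (not one of the paper's three heuristics), the following greedy rule batches equal-length jobs by release date and assigns each batch to the earliest-free parallel batch machine; the capacity and job data are invented:

```python
# Toy constructive heuristic for parallel batch-processing machines.
from dataclasses import dataclass
import heapq

@dataclass
class Job:
    release: int
    weight: int

def greedy_batch_schedule(jobs, capacity, proc_time, n_machines):
    jobs = sorted(jobs, key=lambda j: (j.release, -j.weight))
    machines = [0] * n_machines                    # heap of machine-free times
    heapq.heapify(machines)
    makespan = 0
    for i in range(0, len(jobs), capacity):
        batch = jobs[i:i + capacity]
        ready = max(j.release for j in batch)      # batch waits for all its jobs
        free = heapq.heappop(machines)
        finish = max(free, ready) + proc_time      # equal processing times
        heapq.heappush(machines, finish)
        makespan = max(makespan, finish)
    return makespan

jobs = [Job(0, 3), Job(2, 1), Job(2, 5), Job(5, 2), Job(6, 4)]
print(greedy_batch_schedule(jobs, capacity=2, proc_time=4, n_machines=2))  # -> 10
```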
ERIC Educational Resources Information Center
Taylor, Rosemarye T.; Pelletier, Kelly; Trimble, Todd; Ruiz, Eddie
2014-01-01
The purpose of these three parallel mixed method studies was to measure the effectiveness of an urban school district's 2011 Preparing New Principals Program (PNPP). Results supported the premise that preparing principals for school leadership in 2013 must develop them as instructional leaders who can improve teacher performance and student…
ERIC Educational Resources Information Center
Bozkur, B. Ümit; Erim, Ali; Çelik-Demiray, Pinar
2018-01-01
This research investigates the effect of individual voice training on pre-service Turkish language teachers' speaking skills. The main claim in this research is that teachers' most significant teaching instrument is their voice and it needs to be trained. The research was based on the convergent parallel mixed method. The quantitative part was…
Attitudes and Opinions of Special Education Candidate Teachers Regarding Digital Technology
ERIC Educational Resources Information Center
Ozdamli, Fezile
2017-01-01
Parallel to the rapid development of information and communication technology, the demand for its use in schools and classrooms is increasing. The purpose of this study is therefore to determine the attitudes and views of students who will become special education teachers regarding the use of digital technology in education. A mixed method,…
ERIC Educational Resources Information Center
Goldhaber, Dan; Long, Mark C.; Person, Ann E.; Rooklyn, Jordan
2017-01-01
We investigate factors influencing student sign-ups for Washington State's College Bound Scholarship (CBS) program. We find a substantial share of eligible middle school students fail to sign the CBS, forgoing college financial aid. Student characteristics associated with signing the scholarship parallel characteristics of low-income students who…
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in nonparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistically based item selection of the cell-only method (Chen et al., Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem-distance weighted deviation model (IID WDM) (Swanson & Stocking, Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method for constructing multiple parallel forms by matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
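A toy sketch of sampling-based assembly with content balancing follows; the pool, blueprint, and cell definitions are invented for illustration and do not reproduce the cell-only method's statistical cells:

```python
# Toy form assembly: draw items cell-by-cell so every form meets the same
# content blueprint, yielding parallel forms without post-hoc adjustment.
import random

pool = [{"id": i,
         "content": random.choice(["algebra", "geometry", "stats"]),
         "difficulty_cell": random.choice(["easy", "medium", "hard"])}
        for i in range(300)]

blueprint = {("algebra", "easy"): 3, ("algebra", "hard"): 2,
             ("geometry", "medium"): 3, ("stats", "easy"): 2}

def assemble_form(pool, blueprint, used_ids):
    form = []
    for (content, cell), count in blueprint.items():
        candidates = [it for it in pool
                      if it["content"] == content
                      and it["difficulty_cell"] == cell
                      and it["id"] not in used_ids]
        picked = random.sample(candidates, count)   # random search, not optimization
        form.extend(picked)
        used_ids.update(it["id"] for it in picked)
    return form

used = set()
forms = [assemble_form(pool, blueprint, used) for _ in range(3)]  # three parallel forms
```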
NASA Astrophysics Data System (ADS)
Engdahl, N. B.
2016-12-01
Mixing rates in porous media have been a heavily researched topic in recent years, covering analytic, random, and structured fields. However, there are some persistent assumptions and common features to these models that raise questions about the generality of the results. One of these commonalities is the orientation of the flow field with respect to the heterogeneity structure: the two are almost always defined to be parallel to each other whenever there is an elongated axis of permeability correlation. Given the vastly different tortuosities for flow parallel to bedding and flow transverse to bedding, this assumption of parallel orientation may have significant effects on reaction rates when natural flows deviate from the assumed setting. This study investigates the role of orientation on mixing and reaction rates in multi-scale, 3D heterogeneous porous media with varying degrees of anisotropy in the correlation structure. Ten realizations of a small flow field, with three anisotropy levels, were simulated for flow parallel and transverse to bedding. Transport was simulated in each model with an advective-diffusive random walk, and reactions were simulated using the chemical Langevin equation. The reaction system is a vertically segregated, transverse mixing problem between two mobile reactants. The results show that different transport behaviors and reaction rates are obtained by simply rotating the direction of flow relative to bedding, even when the net flux in both directions is the same. This behavior was observed for three different weightings of the initial condition: 1) uniform, 2) flux-based, and 3) travel-time-based. The different schemes resulted in 20-50% more mass formation in the transverse direction than the longitudinal one. The greatest variability in mass was observed for the flux weights, and these were proportional to the level of anisotropy. The implications of this study are that flux or travel-time weights do not guarantee a fair comparison in this kind of mixing scenario and that the role of directional tendencies on reaction rates can be significant. Further, it may be necessary to include anisotropy in future upscaled models to create robust methods that give representative reaction rates for any flow direction relative to geologic bedding.
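The transport step described above can be sketched as an advective-diffusive random walk whose mean-flow direction is rotated relative to bedding; the uniform velocity and parameters below are placeholders for the study's heterogeneous 3D realizations:

```python
# Minimal advective-diffusive random walk with a rotatable mean-flow direction.
import numpy as np

rng = np.random.default_rng(0)
n, dt, D = 10_000, 0.1, 1e-3
theta = np.deg2rad(90.0)                      # 0 = flow parallel to bedding, 90 = transverse
v = 0.05 * np.array([np.cos(theta), np.sin(theta)])

x = np.zeros((n, 2))                          # particles start on a plane at the origin
for _ in range(1000):
    x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n, 2))

# Spread transverse to the mean flow controls how fast segregated reactants mix.
print("transverse spread:", x[:, 1].std() if theta == 0 else x[:, 0].std())
```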
A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection
NASA Technical Reports Server (NTRS)
Buell, Jeffrey C.
1988-01-01
A fast and accurate numerical method, for nonlinear conservation equation systems whose solutions are periodic in two of the three spatial dimensions, is presently implemented for the case of Rayleigh-Benard convection between two rigid parallel plates in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions secure the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.
Samiei, Ehsan; de Leon Derby, Maria Diaz; den Berg, Andre Van; Hoorfar, Mina
2017-01-17
This paper presents an electrohydrodynamic technique for rapid mixing of droplets in open and closed digital microfluidic (DMF) platforms. Mixing is performed by applying a high-frequency AC voltage to the coplanar or parallel electrodes, inducing circulation zones inside the droplet that result in rapid mixing of its content. The advantages of the proposed method over conventional mixing methods, which operate by transporting the droplet back and forth and side to side, include 1) a shorter mixing time (as fast as 0.25 s), 2) the use of fewer electrodes, reducing the size of the chip, and 3) the stationary nature of the technique, which reduces the chance of cross-contamination and surface biofouling. Mixing using the proposed method is performed to create a uniform mixture after merging a water droplet with another droplet containing either particles or dye. The results show that increasing the frequency and/or the amplitude of the applied voltage enhances the mixing process. However, actuation with a very high frequency and voltage may result in shedding picoliter satellite droplets. Therefore, for each frequency there is an effective range of the amplitude which provides rapid mixing and avoids shedding satellite droplets. Also, an increase in the gap height between the two plates (for the closed DMF platforms) significantly enhances the mixing efficiency due to lower viscous effects. Effects of the addition of salts and DNA to the samples were also studied: the electrothermal effect decreased in these cases, which was addressed by increasing the frequency of the applied voltage. To ensure that the high-frequency actuation does not increase the sample temperature excessively, the temperature change was monitored using a thermal imaging camera, and the increase in temperature was found to be negligible.
Taylor, Jennifer A; Barnes, Brittany; Davis, Andrea L; Wright, Jasmine; Widman, Shannon; LeVasseur, Michael
2016-02-01
Struck-by injuries were observed to be higher among females than males in an urban fire department. This disparity was investigated while gaining a grounded understanding of EMS responders' experiences of patient-initiated violence. A convergent parallel mixed methods design was employed. Using a linked injury dataset, patient-initiated violence estimates were calculated comparing genders. Semi-structured interviews and a focus group were conducted with injured EMS responders. Paramedics had significantly higher odds of patient-initiated violence injuries than firefighters (OR 14.4, 95% CI: 9.2-22.2, P < 0.001). Females reported increased odds of patient-initiated violence injuries compared to males (OR = 6.25, 95% CI 3.8-10.2), but this relationship was entirely mediated through occupation (AOR = 1.64, 95% CI 0.94-2.85). Qualitative data illuminated the impact of patient-initiated violence and highlighted important organizational opportunities for intervention. Mixed methods greatly enhanced the assessment of EMS responder patient-initiated violence prevention. © 2016 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc.
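As a sketch of the quantitizing arithmetic behind figures such as OR 14.4 (95% CI 9.2-22.2), the snippet below computes an odds ratio with a Wald confidence interval from a 2x2 table; the counts are invented for illustration, not the study's linked injury data:

```python
# Odds ratio with Wald 95% CI from an assumed 2x2 table.
import math

a, b = 120, 40          # patient-initiated violence injuries: paramedics, firefighters
c, d = 300, 1440        # other injuries: paramedics, firefighters

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)           # SE of log(OR)
lo, hi = (or_ * math.exp(s * 1.96 * se) for s in (-1, 1))
print(f"OR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```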
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj
2006-01-01
A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.
Electroosmotic flow and mixing in microchannels with the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Tang, G. H.; Li, Zhuo; Wang, J. K.; He, Y. L.; Tao, W. Q.
2006-11-01
Understanding the electroosmotic flow in microchannels is of both fundamental and practical significance for the design and optimization of various microfluidic devices to control fluid motion. In this paper, a lattice Boltzmann equation, which recovers the nonlinear Poisson-Boltzmann equation, is used to solve the electric potential distribution in the electrolytes, and another lattice Boltzmann equation, which recovers the Navier-Stokes equation including the external force term, is used to solve the velocity fields. The method is validated by the electric potential distribution in the electrolytes and the pressure driven pulsating flow. Steady-state and pulsating electroosmotic flows in two-dimensional parallel uniform and nonuniform charged microchannels are studied with this lattice Boltzmann method. The simulation results show that the heterogeneous surface potential distribution and the electroosmotic pulsating flow can induce chaotic advection and thus enhance the mixing in microfluidic systems efficiently.
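A compact sketch of the flow half of such a solver follows: a D2Q9 BGK lattice Boltzmann step for body-force-driven flow between parallel plates, the skeleton that the paper couples to a second lattice equation for the electric potential. Grid size, relaxation time, and forcing are assumptions:

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch: forced channel flow, no-slip walls.
import numpy as np

nx, ny, tau, g = 100, 32, 0.8, 1e-6          # grid, relaxation time, body force
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]            # opposite directions for bounce-back

f = np.ones((9, nx, ny)) * w[:, None, None]  # equilibrium at rest, rho = 1
for _ in range(2000):
    rho = f.sum(axis=0)
    u = np.einsum('iab,ij->jab', f, e) / rho
    u[0] += g * tau / rho                    # velocity-shift forcing (pressure gradient)
    eu = np.einsum('ij,jab->iab', e, u)
    usq = (u**2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    f += (feq - f) / tau                     # BGK collision
    for i in range(9):                       # streaming, periodic in x
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    for y in (0, ny - 1):                    # full-way bounce-back: no-slip plates
        f[:, :, y] = f[opp, :, y]
```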
Cutting through the noise: an evaluative framework for research communication
NASA Astrophysics Data System (ADS)
Strickert, G. E.; Bradford, L. E.; Shantz, S.; Steelman, T.; Orozs, C.; Rose, I.
2017-12-01
With an ever-increasing amount of research comes a parallel challenge: mobilizing that research for decision making, policy development, and management actions. The traditional 'loading dock' model of science to policy is under renovation, replaced by more engaging methods of research communication. Research communication falls on a continuum from passive methods (e.g., reports, social media, infographics) to more active methods (e.g., forum theatre, decision labs, stakeholder planning, and mixed-media installations that blend art, science, and traditional knowledge). Drawing on a five-year water science research program in the Saskatchewan River Basin, an evaluation framework is presented that draws on a wide community of knowledge users, including First Nations and Metis people, community organizers, farmers, consultants, researchers, and civil servants. A mixed method framework consisting of quantitative surveys, qualitative interviews, focus groups, and q-sorts demonstrates that participants prefer more active means of research communication to draw them into the research, but they also value more traditional, passive methods for providing in-depth information when needed.
A dynamic bead-based microarray for parallel DNA detection
NASA Astrophysics Data System (ADS)
Sochol, R. D.; Casavant, B. P.; Dueck, M. E.; Lee, L. P.; Lin, L.
2011-05-01
A microfluidic system has been designed and constructed by means of micromachining processes to integrate both microfluidic mixing of mobile microbeads and hydrodynamic microbead arraying capabilities on a single chip to simultaneously detect multiple bio-molecules. The prototype system has four parallel reaction chambers, which include microchannels of 18 × 50 µm2 cross-sectional area and a microfluidic mixing section of 22 cm length. Parallel detection of multiple DNA oligonucleotide sequences was achieved via molecular beacon probes immobilized on polystyrene microbeads of 16 µm diameter. Experimental results show quantitative detection of three distinct DNA oligonucleotide sequences from the Hepatitis C viral (HCV) genome with single base-pair mismatch specificity. Our dynamic bead-based microarray offers an effective microfluidic platform to increase parallelization of reactions and improve microbead handling for various biological applications, including bio-molecule detection, medical diagnostics and drug screening.
Kolbe, Nina; Kugler, Christiane; Schnepp, Wilfried; Jaarsma, Tiny
2016-01-01
Patients with heart failure (HF) often worry about resuming sexual activity and may need information. Nurses have a role in helping patients to live with the consequences of HF and can be expected to discuss patients' sexual concerns. The aims of this study were to identify whether nurses discuss consequences of HF on sexuality with patients and to explore their perceived role and barriers regarding this topic. A cross-sectional research design with a convergent parallel mixed method approach was used combining qualitative and quantitative data collected with a self-reported questionnaire. Nurses in this study rarely addressed sexual issues with their patients. The nurses did not feel that discussing sexual concerns with their patients was their responsibility, and only 8% of the nurses expressed confidence to do so. The main phenomenon in discussing sexual concerns seems to be "one of silence": Neither patients nor nurses talk about sexual concerns. Factors influencing this include structural barriers, lack of knowledge and communication skills, as well as relevance of the topic and relationship to patients. Cardiac nurses in Germany rarely practice sexual counseling. It is a phenomenon that is silent. Education and skill-based training might hold potential to "break the silence."
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
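In the spirit of the patent's bioinformatics module, a toy primer-screening step is sketched below using the rough Wallace rule (Tm = 2(A+T) + 4(G+C) degrees C) in place of the nearest-neighbor thermodynamics a real design system would use; the sequences are invented:

```python
# Toy melting-temperature screen for candidate primers (Wallace rule only).
def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

primers = ["ATGCGTACGTTAGC", "GGGCGCGCCTTAAA", "ATATATATATATAT"]
for p in primers:
    print(p, f"Tm ~ {wallace_tm(p):.0f} C")   # keep primers in a narrow Tm band
```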
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth directional angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (maximum tangential stress and minimum strain energy density) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code, are performed. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter, over the full range from pure tensile loading to pure shear loading and over a wide range of temperatures (from 0.1 K to 800 K), are obtained and analyzed. It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
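The continuum benchmark referred to above can be reproduced for the classical one-parameter case: the maximum tangential stress criterion gives the kink angle from KI*sin(theta) + KII*(3*cos(theta) - 1) = 0, recovering 0 degrees for pure mode I and about -70.5 degrees for pure mode II. The sketch below parameterizes mixity as Me = (2/pi)*arctan(KI/KII); it illustrates the classical criterion, not the authors' multi-parameter form:

```python
# Kink angle from the classical maximum tangential stress (MTS) criterion.
import numpy as np

def mts_angle(KI, KII):
    """Solve KI*sin(t) + KII*(3*cos(t) - 1) = 0 for the kink angle t (radians)."""
    if KII == 0:
        return 0.0
    return 2.0 * np.arctan((KI - np.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

for me in (1.0, 0.75, 0.5, 0.25, 0.0):        # 1 = pure mode I, 0 = pure mode II
    phase = me * np.pi / 2.0                  # Me = (2/pi) * arctan(KI / KII)
    KI, KII = np.sin(phase), np.cos(phase)
    print(f"mixity {me:.2f}: kink angle {np.degrees(mts_angle(KI, KII)):6.1f} deg")
```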
A picoliter-volume mixer for microfluidic analytical systems.
He, B; Burke, B J; Zhang, X; Zhang, R; Regnier, F E
2001-05-01
Mixing confluent liquid streams is an important but difficult operation in microfluidic systems. This paper reports the construction and characterization of a 100-pL mixer for liquids transported by electroosmotic flow. Mixing was achieved in a microfabricated device with multiple intersecting channels of varying lengths and a bimodal width distribution. All channels running parallel to the direction of flow were 5 µm in width, whereas larger 27-µm-wide channels ran back and forth through the parallel channel network at a 45-degree angle. The channel network composing the mixer was approximately 10 µm deep. It was observed that little mixing of the confluent solvent streams occurred in the 100-µm-wide, 300-µm-long mixer inlet channel, where mixing would be achieved almost exclusively by diffusion. In contrast, after passage through the channel network in the approximately 200-µm-long static mixer bed, mixing was complete, as determined by confocal microscopy and CCD detection. Theoretical simulations were also performed in an attempt to describe the extent of mixing in microfabricated systems.
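The design logic above rests on the quadratic scaling of diffusive mixing time with lamella width, t ~ w^2/(2D). A back-of-envelope sketch with an assumed small-molecule diffusivity (not a measurement from the paper) shows why the 5 µm channels mix in tens of milliseconds while the wide inlet does not:

```python
# Diffusive mixing time versus channel width, t ~ w^2 / (2D).
D = 5e-10                      # assumed small-molecule diffusivity, m^2/s
for w_um in (100, 27, 5):      # widths from the device above, in micrometres
    w = w_um * 1e-6
    print(f"w = {w_um:3d} um -> t ~ {w**2 / (2 * D):.3g} s")
# -> roughly 10 s, 0.7 s, and 0.025 s respectively
```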
DSPCP: A Data Scalable Approach for Identifying Relationships in Parallel Coordinates.
Nguyen, Hoa; Rosen, Paul
2018-03-01
Parallel coordinates plots (PCPs) are a well-studied technique for exploring multi-attribute datasets. In many situations, users find them a flexible method to analyze and interact with data. Unfortunately, using PCPs becomes challenging as the number of data items grows large or multiple trends within the data mix in the visualization. The resulting overdraw can obscure important features. A number of modifications to PCPs have been proposed, including using color, opacity, smooth curves, frequency, density, and animation to mitigate this problem. However, these modified PCPs tend to have their own limitations in the kinds of relationships they emphasize. We propose a new data scalable design for representing and exploring data relationships in PCPs. The approach exploits the point/line duality property of PCPs and a local linear assumption of data to extract and represent relationship summarizations. This approach simultaneously shows relationships in the data and the consistency of those relationships. Our approach supports various visualization tasks, including mixed linear and nonlinear pattern identification, noise detection, and outlier detection, all in large data. We demonstrate these tasks on multiple synthetic and real-world datasets.
Fluid Structure Interaction Techniques For Extrusion And Mixing Processes
NASA Astrophysics Data System (ADS)
Valette, Rudy; Vergnes, Bruno; Coupez, Thierry
2007-05-01
This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids, such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes located inside the so-called immersed domain, each sub-domain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing computation of a fill factor usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a level set and Hamilton-Jacobi method.
Development of guidelines for usage of high percent RAP in warm-mix asphalt pavements.
DOT National Transportation Integrated Search
2011-12-15
Road construction using warm-mix asphalt has been rapidly gaining popularity in the United States, in part because : WMA is believed to be friendlier to the environment as compared to hot-mix asphalt. Parallel to this rapid growth in : WMA constructi...
ERIC Educational Resources Information Center
Hatley, Leshell April Denise
2016-01-01
Today, most young people in the United States (U.S.) live technology-saturated lives. Their educational, entertainment, and career options originate from and demand incredible technological innovations. However, this extensive ownership of and access to technology does not indicate that today's youth know how technology works or how to control and…
Coaxial microreactor for particle synthesis
Bartsch, Michael; Kanouff, Michael P; Ferko, Scott M; Crocker, Robert W; Wally, Karl
2013-10-22
A coaxial fluid flow microreactor system disposed on a microfluidic chip utilizing laminar flow for synthesizing particles from solution. Flow geometries produced by the mixing system make use of hydrodynamic focusing to confine a core flow to a small axially-symmetric, centrally positioned and spatially well-defined portion of a flow channel cross-section to provide highly uniform diffusional mixing between a reactant core and sheath flow streams. The microreactor is fabricated in such a way that a substantially planar two-dimensional arrangement of microfluidic channels will produce a three-dimensional core/sheath flow geometry. The microreactor system can comprise one or more coaxial mixing stages that can be arranged singly, in series, in parallel or nested concentrically in parallel.
Mixed Potentials: Experimental Illustrations of an Important Concept in Practical Electrochemistry.
ERIC Educational Resources Information Center
Power, G. P.; Ritchie, I. M.
1983-01-01
Presents a largely experimental approach to the concept of mixed potentials, pointing out the close parallel that exists between equilibrium and mixed potentials. Describes several important examples of mixed potentials, providing current-voltage and polarization curves and half reactions as examples. Includes a discussion of corrosion reactions and…
Parallel Numerical Simulations of Water Reservoirs
NASA Astrophysics Data System (ADS)
Torres, Pedro; Mangiavacchi, Norberto
2010-11-01
The study of the water flow and scalar transport in water reservoirs is important for the determination of the water quality during the initial stages of the reservoir filling and during the life of the reservoir. For this purpose, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element, which satisfies the Babuska-Brezzi (BB) condition and thus provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving, and post-processing, were implemented using the PETSc library. The resulting linear systems for the velocity and the pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase the parallel performance in the solution of the linear systems, we employ the static condensation method for solving the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of increased memory usage. Performance results for other intensive parts of the code in a computer cluster are also presented.
100 Gbps Wireless System and Circuit Design Using Parallel Spread-Spectrum Sequencing
NASA Astrophysics Data System (ADS)
Scheytt, J. Christoph; Javed, Abdul Rehman; Bammidi, Eswara Rao; KrishneGowda, Karthik; Kallfass, Ingmar; Kraemer, Rolf
2017-09-01
In this article, mixed analog/digital signal-processing techniques based on parallel spread-spectrum sequencing (PSSS) and radio frequency (RF) carrier synchronization for ultra-broadband wireless communication are investigated at the system and circuit levels.
GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA
NASA Astrophysics Data System (ADS)
Ren, Qinlong
Lattice Boltzmann method (LBM) has been developed as a powerful numerical approach to simulate complex fluid flow and heat transfer phenomena during the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, the handling of complex geometries, and its highly parallel nature. The lattice Boltzmann method has been applied to various fluid behaviors and heat transfer processes, including conjugate heat transfer, magnetic and electric fields, diffusion and mixing, chemical reactions, multiphase flow, phase change, non-isothermal flow in porous media, microfluidics, and fluid-structure interactions in biological systems. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Euler nodes by LBM, while complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been used for decades to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used for computer video cards, have emerged as powerful high-performance computing devices in recent years. Unlike CPUs, GPUs with thousands of cores are inexpensive; for example, the GPU used in the current work (GeForce GTX TITAN) has 2688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture in 2007, which includes both hardware and a programming environment, has made GPU computing attractive. Due to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with a clear performance benefit in recent years. In the current work, LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, the PCM melting process in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems, all on a CUDA computational platform. It is demonstrated that LBM is an efficient method for simulating complex heat transfer problems using GPUs on CUDA.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex, rate-controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelet regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state of the art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and can easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The PDF method showed favorable results when applied to several supersonic diffusion flames and spray flames. The EUPDF source code will be available with the National Combustion Code (NCC) as a complete package.
NASA Technical Reports Server (NTRS)
Ustinov, E. A.
1999-01-01
Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of inversion algorithms. We present an analytic approach to computing temperature and mixing ratio weighting functions that is based on our previous results, but the resulting expressions use intermediate variables that are generated in the computation of the observable radiances themselves. Upwelling radiances at a given level in the atmosphere and atmospheric transmittances from space to that level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances, which substantially decreases the computer time required. Implications for the nadir and limb viewing geometries are discussed.
Evaluation of peristaltic micromixers for highly integrated microfluidic systems
Kim, Duckjong; Rho, Hoon Suk; Jambovane, Sachin; Shin, Soojeong; Hong, Jong Wook
2016-01-01
Microfluidic devices based on the multilayer soft lithography allow accurate manipulation of liquids, handling reagents at the sub-nanoliter level, and performing multiple reactions in parallel processors by adapting micromixers. Here, we have experimentally evaluated and compared several designs of micromixers and operating conditions to find design guidelines for the micromixers. We tested circular, triangular, and rectangular mixing loops and measured mixing performance according to the position and the width of the valves that drive nanoliters of fluids in the micrometer scale mixing loop. We found that the rectangular mixer is best for the applications of highly integrated microfluidic platforms in terms of the mixing performance and the space utilization. This study provides an improved understanding of the flow behaviors inside micromixers and design guidelines for micromixers that are critical to build higher order fluidic systems for the complicated parallel bio/chemical processes on a chip. PMID:27036809
Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes
NASA Astrophysics Data System (ADS)
Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry
2007-04-01
This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids, such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes located inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing computation of a fill factor usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a level set and Hamilton-Jacobi method.
NASA Astrophysics Data System (ADS)
Kim, Daeik D.; Thomas, Mikkel A.; Brooke, Martin A.; Jokerst, Nan M.
2004-06-01
Arrays of embedded bipolar junction transistor (BJT) photodetectors (PDs) and a parallel mixed-signal processing system were fabricated as a silicon complementary metal oxide semiconductor (Si-CMOS) circuit for the integration of optical sensors on the surface of the chip. The circuit was fabricated with the AMI 1.5 um n-well CMOS process, and the embedded PNP BJT PD has a pixel size of 8 um by 8 um. The BJT PD was chosen to take advantage of its higher-gain amplification of photocurrent compared with PIN-type detectors, since the target application is a low-speed, high-sensitivity sensor. The photocurrent generated by the BJT PD is processed by the mixed-signal system, which consists of parallel first-order low-pass delta-sigma oversampling analog-to-digital converters (ADCs). There are 8 parallel ADCs on the chip, and a group of 8 BJT PDs is selected with CMOS switches. An array of PDs is composed of three or six groups of PDs, depending on the number of rows.
Rattan, Jesse; Noznesky, Elizabeth; Curry, Dora Ward; Galavotti, Christine; Hwang, Shuyuan; Rodriguez, Mariela
2016-08-11
The global health community has recognized that expanding the contraceptive method mix is a programmatic imperative since (1) one-third of unintended pregnancies are due to method failure or discontinuation, and (2) the addition of a new method to the existing mix tends to increase total contraceptive use. Since July 2011, CARE has been implementing the Supporting Access to Family Planning and Post-Abortion Care (SAFPAC) initiative to increase the availability, quality, and use of contraception, with a particular focus on highly effective and long-acting reversible methods-intrauterine devices (IUDs) and implants-in crisis-affected settings in Chad and the Democratic Republic of the Congo (DRC). This initiative supports government health systems at primary and referral levels to provide a wide range of contraceptive services to people affected by conflict and/or displacement. Before the initiative, long-acting reversible methods were either unknown or unavailable in the intervention areas. However, as soon as trained providers were in place, we noted a dramatic and sustained increase in new users of all contraceptive methods, especially implants, with total new clients reaching 82,855, or 32% of the estimated number of women of reproductive age in the respective catchment areas in both countries, at the end of the fourth year. Demand for implants was very strong in the first 6 months after provider training. During this time, implants consistently accounted for more than 50% of the method mix, reaching as high as 89% in Chad and 74% in DRC. To ensure that all clients were getting the contraceptive method of their choice, we conducted a series of discussions and sought feedback from different stakeholders in order to modify program strategies. Key program modifications included more focused communication in mass media, community, and interpersonal channels about the benefits of IUDs while reinforcing the wide range of methods available and refresher training for providers on how to insert IUDs to strengthen their competence and confidence. Over time, we noted a gradual redistribution of the method mix in parallel with vigorous continued family planning uptake. This experience suggests that analyzing method mix can be helpful for designing program strategies and that expanding method choice can accelerate satisfying demand, especially in environments with high unmet need for contraception. © Rattan et al.
NASA Astrophysics Data System (ADS)
Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul
Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.
Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction
NASA Astrophysics Data System (ADS)
Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.
1996-11-01
Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been limited by how satisfactorily the molecular mixing terms can be modeled. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction of the mean values of chemical species?
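The LMSE/IEM model mentioned above has a simple particle form: each notional particle's composition relaxes toward the ensemble mean on a mixing timescale, which conserves the mean and decays the variance but cannot change the PDF shape, one motivation for variants like GIEM. A minimal sketch with assumed constants and "blobs"-like segregated initial data:

```python
# Particle implementation of IEM/LMSE: d(phi)/dt = -(C_phi/2) * (phi - <phi>) / tau.
import numpy as np

rng = np.random.default_rng(2)
n, c_phi, tau, dt = 20_000, 2.0, 1.0, 0.01
phi = (rng.random(n) < 0.5).astype(float)     # fully segregated: phi = 0 or 1

for _ in range(500):
    phi += -0.5 * c_phi * (phi - phi.mean()) / tau * dt

# Mean is conserved; scalar variance decays as exp(-c_phi * t / tau).
print("mean:", phi.mean(), "variance:", phi.var())
```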
Dahlberg, K; Odencrants, S; Hagberg, L
2016-01-01
Introduction: Day surgery is a well-established practice in many European countries, but only limited information is available regarding postoperative recovery at home, and there is currently no standard procedure for postoperative follow-up. Furthermore, there is a need to make better use of modern technology, such as mobile applications, in assessing patient-reported outcomes. This article describes the Recovery Assessment by Phone Points (RAPP) study protocol, a mixed-methods study to evaluate whether a systematic e-assessment follow-up in patients undergoing day surgery is cost-effective and improves postoperative recovery, health and quality of life. Methods and analysis: This study has a mixed-methods design that includes a multicentre, two-group, parallel, single-blind randomised controlled trial and qualitative interview studies. 1000 patients >17 years of age who are undergoing day surgery will be randomly assigned either to e-assessed postoperative recovery follow-up daily for 14 days, measured via a smartphone app including the Swedish web version of the Quality of Recovery (SwQoR) instrument, or to standard care (i.e., no follow-up). The primary aim is cost-effectiveness. Secondary aims are (A) to explore whether a systematic e-assessment follow-up after day surgery has a positive effect on postoperative recovery, health-related quality of life (QoL) and overall health; (B) to determine whether differences in postoperative recovery are associated with patient characteristics, type of surgery and anaesthesia; (C) to determine whether differences in health literacy have a substantial and distinct effect on postoperative recovery, health and QoL; and (D) to describe patient and staff experiences with a systematic e-assessment follow-up after day surgery. The primary aim will be measured at 2 weeks postoperatively, and secondary outcomes (A–C) at 1 and 2 weeks and (D) at 1 and 4 months. Trial registration number: NCT02492191; Pre-results. PMID:26769788
Method of repairing discontinuity in fiberglass structures
NASA Technical Reports Server (NTRS)
Gelb, L. L.; Helbert, W. B., Jr.; Enie, R. B.; Mulliken, R. F. (Inventor)
1974-01-01
Damaged fiberglass structures are repaired by substantially filling the irregular surfaced damaged area with a liquid, self-curing resin, preferably an epoxy resin mixed with chopped fiberglass, and then applying to the resin surface the first of several woven fiberglass swatches which has stitching in a zig-zag pattern parallel to each of its edges and a fringe of warp and fill glass fibers about the edges outward of the stitching. The method is especially applicable to repair of fiberglass rocket engine casings and is particularly advantageous since it restores the repaired fiberglass structure to substantially its original strength without any significant changes in the geometry or mass of the structure.
Coherent radio-frequency detection for narrowband direct comb spectroscopy.
Anstie, James D; Perrella, Christopher; Light, Philip S; Luiten, Andre N
2016-02-22
We demonstrate a scheme for coherent narrowband direct optical frequency comb spectroscopy. An extended cavity diode laser is injection locked to a single mode of an optical frequency comb, frequency shifted, and used as a local oscillator to optically down-mix the interrogating comb on a fast photodetector. The high spectral coherence of the injection lock generates a microwave frequency comb at the output of the photodiode with very narrow features, enabling spectral information to be further down-mixed to RF frequencies, allowing optical transmittance and phase to be obtained using electronics commonly found in the lab. We demonstrate two methods for achieving this step: a serial mode-by-mode approach and a parallel dual-comb approach, with the Cs D1 transition at 894 nm as a test case.
GPU implementation of the simplex identification via split augmented Lagrangian
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Nascimento, José M. P.
2015-10-01
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear reactor core is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
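A toy 1D analogue of Schwarz iteration is sketched below. For simplicity it uses the classical overlapping variant with Dirichlet interface data; the method described above instead uses Robin interface conditions on non-overlapping subdomains, which this sketch does not reproduce:

```python
# Alternating Schwarz on -u'' = f over [0,1], split into two overlapping
# subdomains; each local solve is a small direct finite-difference solve.
import numpy as np

def solve_poisson_dirichlet(f, a, b, ua, ub, n):
    """Direct FD solve of -u'' = f on [a, b] with Dirichlet end values."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    A = (np.diag(2 * np.ones(n - 2))
         - np.diag(np.ones(n - 3), 1)
         - np.diag(np.ones(n - 3), -1)) / h**2
    rhs = f(x[1:-1]).astype(float)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: u = sin(pi x)
x1b, x2a = 0.6, 0.4                          # subdomains [0, 0.6] and [0.4, 1]
g1 = g2 = 0.0                                # initial interface guesses
for _ in range(20):
    xs1, u1 = solve_poisson_dirichlet(f, 0.0, x1b, 0.0, g1, 61)
    g2 = np.interp(x2a, xs1, u1)             # pass trace to the right subdomain
    xs2, u2 = solve_poisson_dirichlet(f, x2a, 1.0, g2, 0.0, 61)
    g1 = np.interp(x1b, xs2, u2)             # pass trace back to the left

print("u(0.5) ~", np.interp(0.5, xs1, u1), "(exact 1.0)")
```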
NASA Technical Reports Server (NTRS)
Chung, T. J. (Editor); Karr, Gerald R. (Editor)
1989-01-01
Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.
NASA Astrophysics Data System (ADS)
Stepanova, L. V.
2017-12-01
Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading using Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code, are performed. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack are subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter in a wide range of values from pure tensile loading to pure shear loading in a wide range of temperatures (from 0.1 K to 800 K) are obtained and analyzed. It is shown that the crack propagation direction angles obtained by molecular dynamics coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields. The multi-parameter fracture criteria are based on the multi-parameter stress field description taking into account the higher order terms of the Williams series expansion of the crack tip fields.
Spindler, Esther; Bitar, Nisreen; Solo, Julie; Menstell, Elizabeth; Shattuck, Dominick
2017-01-01
Health practitioners, researchers, and donors are stumped about Jordan's stalled fertility rate, which has stagnated between 3.7 and 3.5 children per woman from 2002 to 2012, above the national replacement level of 2.1. This stall paralleled United States Agency for International Development (USAID) funding investments in family planning in Jordan, triggering an assessment of USAID family planning programming in Jordan. This article describes the methods, results, and implications of the programmatic assessment. Methods included an extensive desk review of USAID programs in Jordan and 69 interviews with reproductive health stakeholders. We explored reasons for the stagnation of Jordan's total fertility rate (TFR) and assessed the effects of USAID programming on family planning outcomes over the same time period. The assessment results suggest that the increased use of less effective methods, in particular withdrawal and condoms, is contributing to Jordan's TFR stall. Jordan's limited method mix, combined with strong sociocultural determinants around reproduction and fertility desires, has contributed to low contraceptive effectiveness in Jordan. Over the same time period, USAID contributions toward increasing family planning access and use, largely focused on service delivery programs, were extensive. Examples of effective initiatives include, among others, the task shifting of IUD insertion services to midwives due to a shortage of female physicians. However, key challenges to improved use of family planning services include limited government investments in family planning programs, influential service provider behaviors and biases that limit informed counseling and choice, pervasive strong social norms around family size and fertility, and the limited availability of different contraceptive methods. In contexts where sociocultural norms and a limited method mix are the dominant barriers to improved family planning use, increased national government investments in synchronized service delivery and social and behavior change activities may be needed to catalyze national-level improvements in family planning outcomes. PMID:29284697
Predicting the stability of a compressible periodic parallel jet flow
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H.
1996-01-01
It is known that mixing enhancement in compressible free shear layer flows with high convective Mach numbers is difficult. One design strategy to get around this is to use multiple nozzles. Extrapolating this design concept in a one-dimensional manner, one arrives at an array of parallel rectangular nozzles where the smaller dimension is ω and the longer dimension, b, is taken to be infinite. In this paper, the feasibility of predicting the stability of this type of compressible periodic parallel jet flow is discussed. The problem is treated using Floquet-Bloch theory. Numerical solutions to this eigenvalue problem are presented. For the case presented, the interjet spacing, s, was selected so that s/ω = 2.23. Typical plots of the eigenvalue and stability curves are presented. Results obtained for a range of convective Mach numbers from 3 to 5 show growth rates ω_i = kc_i/2 ranging from 0.25 to 0.29. These results indicate that coherent two-dimensional structures can occur without difficulty in multiple parallel periodic jet nozzles and that shear layer mixing should occur with this type of nozzle design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods are available, from 0-D perfect mixing models to 3-D Computational Fluid Dynamics (CFD) models, each with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code, to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel between parallel plates. Based on comparisons with analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model performs very well for a wide range of flow problems.
Glover, William A; Atienza, Ederlyn E; Nesbitt, Shannon; Kim, Woo J; Castor, Jared; Cook, Linda; Jerome, Keith R
2016-01-01
Quantitative DNA detection of cytomegalovirus (CMV) and BK virus (BKV) is critical in the management of transplant patients. Quantitative laboratory-developed procedures for CMV and BKV have been described in which much of the processing is automated, resulting in rapid, reproducible, and high-throughput testing of transplant patients. To increase the efficiency of such assays, the performance and stability of four commercial preassembled frozen fast qPCR master mixes (Roche FastStart Universal Probe Master Mix with Rox, Bio-Rad SsoFast Probes Supermix with Rox, Life Technologies TaqMan FastAdvanced Master Mix, and Life Technologies Fast Universal PCR Master Mix), in combination with in-house designed primers and probes, was evaluated using controls and standards from standard CMV and BK assays. A subsequent parallel evaluation using patient samples was performed comparing the performance of freshly prepared assay mixes versus aliquoted frozen master mixes made with two of the fast qPCR mixes (Life Technologies TaqMan FastAdvanced Master Mix, and Bio-Rad SsoFast Probes Supermix with Rox), chosen based on their performance and compatibility with existing PCR cycling conditions. The data demonstrate that the frozen master mixes retain excellent performance over a period of at least 10 weeks. During the parallel testing using clinical specimens, no difference in quantitative results was observed between the preassembled frozen master mixes and freshly prepared master mixes. Preassembled fast real-time qPCR frozen master mixes perform well and represent an additional strategy laboratories can implement to reduce assay preparation times, and to minimize technical errors and effort necessary to perform clinical PCR. © 2015 Wiley Periodicals, Inc.
LSPRAY-IV: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2012-01-01
LSPRAY-IV is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solver. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal, and transport properties of a rapidly vaporizing spray. Some important research areas covered as part of the code development are: (1) the extension of the combined CFD/scalar-Monte-Carlo-PDF method to spray modeling, (2) multi-component liquid spray modeling, and (3) the assessment of various atomization models used in spray calculations. The current version extends the code to the modeling of superheated sprays. The manual provides the user with an understanding of the various models involved in the spray formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers.
MPI parallelization of Vlasov codes for the simulation of nonlinear laser-plasma interactions
NASA Astrophysics Data System (ADS)
Savchenko, V.; Won, K.; Afeyan, B.; Decyk, V.; Albrecht-Marc, M.; Ghizzo, A.; Bertrand, P.
2003-10-01
The simulation of optical mixing driven KEEN waves [1] and electron plasma waves [1] in laser-produced plasmas requires nonlinear kinetic models and massive parallelization. We use Message Passing Interface (MPI) libraries and Appleseed [2] to solve the Vlasov-Poisson system of equations on an 8-node dual-processor Mac G4 cluster. We use the semi-Lagrangian time splitting method [3]. It requires only row-column exchanges in the global data redistribution, minimizing the total number of communications between processors. Recurrent communication patterns for 2D FFTs involve a global transposition. In the Vlasov-Maxwell case, we use splitting into two 1D spatial advections and a 2D momentum advection [4]. The discretized momentum advection equations have a double loop structure, with the outer index assigned to different processors. We adhere to a code structure with separate routines for calculations and data management for parallel computations. [1] B. Afeyan et al., IFSA 2003 Conference Proceedings, Monterey, CA [2] V. K. Decyk, Computers in Physics, 7, 418 (1993) [3] Sonnendrucker et al., JCP 149, 201 (1998) [4] Begue et al., JCP 151, 458 (1999)
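For readers unfamiliar with semi-Lagrangian time splitting, here is a minimal serial sketch for the 1D-1V Vlasov-Poisson system. Periodic boundaries in x, linear interpolation via np.interp, and the grid sizes are illustrative choices, not those of the paper, and the MPI row-column exchange pattern described above is omitted:

```python
# Strang splitting for Vlasov-Poisson: half x-advection, Poisson solve,
# full v-advection, half x-advection. Electrons on a fixed ion background.
import numpy as np

nx, nv = 64, 128
L, vmax = 4 * np.pi, 6.0
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dt = 0.1
kx = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)

# Weak Landau-damping initial perturbation of a Maxwellian.
f = (1 + 0.01 * np.cos(0.5 * x)[:, None]) \
    * np.exp(-v**2 / 2)[None, :] / np.sqrt(2 * np.pi)

def advect_x(f, dt):
    # Shift each constant-v slice by v*dt (periodic in x).
    for j in range(nv):
        f[:, j] = np.interp(x - v[j] * dt, x, f[:, j], period=L)
    return f

def efield(f):
    # Solve dE/dx = rho = 1 - integral f dv via FFT (zero-mean E).
    rho = 1.0 - np.trapz(f, v, axis=1)
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * kx[1:])
    return np.fft.ifft(E_hat).real

def advect_v(f, E, dt):
    # Electron acceleration is -E; shift each constant-x slice accordingly.
    for i in range(nx):
        f[i, :] = np.interp(v + E[i] * dt, v, f[i, :], left=0.0, right=0.0)
    return f

for step in range(100):
    f = advect_x(f, dt / 2)
    f = advect_v(f, efield(f), dt)
    f = advect_x(f, dt / 2)
```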
Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations
NASA Astrophysics Data System (ADS)
Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean
2017-10-01
Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
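A minimal dense sketch of the segregation idea: the coupled system is split into 2x2 blocks, and a block lower-triangular preconditioner built from the leading block and its Schur complement is handed to GMRES. The matrices are random stand-ins, not an actual multifluid plasma discretization, and the Schur complement is formed exactly here; in practice it would be approximated and the diagonal subsolves done with multilevel methods:

```python
# Block lower-triangular preconditioning of K = [[A, B], [C, D]] using
# the Schur complement S = D - C A^{-1} B; with S exact, preconditioned
# GMRES converges in a couple of iterations.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 100
A = rng.normal(size=(n, n)) * 0.1 + 10 * np.eye(n)   # e.g. fluid-type block (stand-in)
D = rng.normal(size=(n, n)) * 0.1 + 10 * np.eye(n)   # e.g. field-type block (stand-in)
B = rng.normal(size=(n, n)) * 0.1                    # off-diagonal coupling
C = rng.normal(size=(n, n)) * 0.1

K = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.solve(A, B)                    # Schur complement of A

def apply_prec(r):
    # Apply M^{-1} for M = [[A, 0], [C, S]].
    z1 = np.linalg.solve(A, r[:n])
    z2 = np.linalg.solve(S, r[n:] - C @ z1)
    return np.concatenate([z1, z2])

M = spla.LinearOperator((2 * n, 2 * n), matvec=apply_prec)
x, info = spla.gmres(sp.csr_matrix(K), np.ones(2 * n), M=M)
# info == 0 signals convergence of the preconditioned iteration.
```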
Separation/extraction, detection, and interpretation of DNA mixtures in forensic science (review).
Tao, Ruiyang; Wang, Shouyu; Zhang, Jiashuo; Zhang, Jingyi; Yang, Zihao; Sheng, Xiang; Hou, Yiping; Zhang, Suhua; Li, Chengtao
2018-05-25
Interpreting mixed DNA samples containing material from multiple contributors has long been considered a major challenge in forensic casework, especially when encountering low-template DNA (LT-DNA) or high-order mixtures that may involve missing alleles (dropout) and unrelated alleles (drop-in), among others. In recent decades, extraordinary progress has been made in the analysis of mixed DNA samples, which has led to increasing attention to this research field. The advent of new methods for the separation and extraction of DNA from mixtures, novel or jointly applied genetic markers for detection, and reliable interpretation approaches for estimating the weight of evidence, as well as the powerful massively parallel sequencing (MPS) technology, has greatly extended the range of mixed samples that can be correctly analyzed. Here, we summarize the investigative approaches and progress in the field of forensic DNA mixture analysis, hoping to provide some assistance to forensic practitioners and to promote further development in this area.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.
1997-01-01
Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involve delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S, and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady-state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds, and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed, as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.
Multiphase three-dimensional direct numerical simulation of a rotating impeller with code Blue
NASA Astrophysics Data System (ADS)
Kahouadji, Lyes; Shin, Seungwon; Chergui, Jalel; Juric, Damir; Craster, Richard V.; Matar, Omar K.
2017-11-01
The flow driven by a rotating impeller inside an open fixed cylindrical cavity is simulated using code Blue, a solver for massively-parallel simulations of fully three-dimensional multiphase flows. The impeller is composed of four blades at a 45° inclination, all attached to a central hub and tube stem. In Blue, solid forms are constructed through the definition of immersed objects via a distance function that accounts for the object's interaction with the flow for both single- and two-phase flows. We use a moving frame technique for imposing translation and/or rotation. The variation of the Reynolds number, the clearance, and the tank aspect ratio are considered, and we highlight the importance of the confinement ratio (blade radius versus tank radius) in the mixing process. Blue uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of a hybrid front-tracking/level-set method designed to handle complex interfacial topological changes. Parallel GMRES and multigrid iterative solvers are applied to the linear systems arising from the implicit solution for the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across fluid phases. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.
1995-01-01
High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization
NASA Astrophysics Data System (ADS)
Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh
2017-05-01
Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected outcome of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.
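To illustrate the texture-optimization idea in its simplest form, the sketch below runs a few iterations in which every patch of a simulation grid is matched to its nearest training-image patch and pixels are re-estimated by averaging the overlapping matches. Everything here (random training image, patch size, brute-force nearest-neighbor search) is an illustrative stand-in; the paper's method additionally honors hard data and parallelizes the search on the GPU:

```python
# One texture-optimization pass: match patches, then average overlaps.
import numpy as np

rng = np.random.default_rng(2)
ti = rng.random((64, 64))          # training image (stand-in)
sim = rng.random((32, 32))         # simulation grid, random start
p, step = 8, 4                     # patch size and sampling stride

# Flatten all training patches into rows for brute-force matching.
tp = np.array([ti[i:i + p, j:j + p].ravel()
               for i in range(ti.shape[0] - p + 1)
               for j in range(ti.shape[1] - p + 1)])

for it in range(5):
    acc = np.zeros_like(sim)
    cnt = np.zeros_like(sim)
    for i in range(0, sim.shape[0] - p + 1, step):
        for j in range(0, sim.shape[1] - p + 1, step):
            patch = sim[i:i + p, j:j + p].ravel()
            best = tp[np.argmin(((tp - patch) ** 2).sum(axis=1))]
            acc[i:i + p, j:j + p] += best.reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
    sim = np.where(cnt > 0, acc / np.maximum(cnt, 1), sim)
```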
Anomalous structural transition of confined hard squares.
Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo
2016-11-01
Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares confined between two parallel walls. We find two competing phases: one is a fluid where the squares have two sides parallel to the walls, while the second is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method, we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward a phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first-order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as confined hard squares.
Interconnect-free parallel logic circuits in a single mechanical resonator
Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.
2011-01-01
In conventional computers, wiring between transistors is required to enable the execution of Boolean logic functions. This has resulted in processors in which billions of transistors are physically interconnected, which limits integration densities, gives rise to huge power consumption and restricts processing speeds. A method to eliminate wiring amongst transistors by condensing Boolean logic into a single active element is thus highly desirable. Here, we demonstrate a novel logic architecture using only a single electromechanical parametric resonator into which multiple channels of binary information are encoded as mechanical oscillations at different frequencies. The parametric resonator can mix these channels, resulting in new mechanical oscillation states that enable the construction of AND, OR and XOR logic gates as well as multibit logic circuits. Moreover, the mechanical logic gates and circuits can be executed simultaneously, giving rise to the prospect of a parallel logic processor in just a single mechanical resonator. PMID:21326230
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources that are well-suited for data-parallelism (vector units) and task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
NASA Astrophysics Data System (ADS)
Bellentani, Laura; Beggi, Andrea; Bordone, Paolo; Bertoni, Andrea
2018-05-01
We present a numerical study of a multichannel electronic Mach-Zehnder interferometer, based on magnetically driven noninteracting edge states. The electron path is defined by a full-scale potential landscape on the two-dimensional electron gas at filling factor 2, assuming initially only the first Landau level as filled. We tailor the two beamsplitters with 50 % interchannel mixing and measure Aharonov-Bohm oscillations in the transmission probability of the second channel. We perform time-dependent simulations by solving the electron Schrödinger equation through a parallel implementation of the split-step Fourier method, and we describe the charge-carrier wave function as a Gaussian wave packet of edge states. We finally develop a simplified theoretical model to explain the features observed in the transmission probability, and we propose possible strategies to optimize gate performances.
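As a pointer to how split-step Fourier propagation works, here is a minimal 1D sketch for the time-dependent Schrödinger equation (with ħ = m = 1), sending a Gaussian wave packet at a potential barrier. The paper's 2D edge-state simulation uses the same kinetic/potential operator splitting, just in two dimensions and in parallel; all parameters below are illustrative:

```python
# Split-step Fourier (Strang splitting): half potential step in real
# space, full kinetic step in Fourier space, half potential step.
import numpy as np

n, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt, k0 = 0.05, 1.5

V = np.where(np.abs(x) < 1.0, 0.5, 0.0)                  # potential barrier
psi = np.exp(-(x + 30) ** 2 / 20) * np.exp(1j * k0 * x)  # Gaussian packet
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))            # normalize

expV = np.exp(-0.5j * V * dt)        # half-step potential propagator
expT = np.exp(-0.5j * k ** 2 * dt)   # full-step kinetic propagator

for _ in range(1000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

# Transmission probability past the barrier.
transmission = np.trapz(np.abs(psi[x > 1.0]) ** 2, x[x > 1.0])
```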
Smith, Aaron Douglas; Lockman, Nur Ain; Holtzapple, Mark T
2011-06-01
Nutrients are essential for microbial growth and metabolism in mixed-culture acid fermentations. Understanding the influence of nutrient feeding strategies on fermentation performance is necessary for optimization. For a four-bottle fermentation train, five nutrient contacting patterns (single-point nutrient addition to fermentors F1, F2, F3, and F4 and multi-point parallel addition) were investigated. Compared to the traditional nutrient contacting method (all nutrients fed to F1), the near-optimal feeding strategies improved exit yield, culture yield, process yield, exit acetate-equivalent yield, conversion, and total acid productivity by approximately 31%, 39%, 46%, 31%, 100%, and 19%, respectively. There was no statistical improvement in total acid concentration. The traditional nutrient feeding strategy had the highest selectivity and acetate-equivalent selectivity. Total acid productivity depends on carbon-nitrogen ratio.
Finlay, Jessica M; Kobayashi, Lindsay C
2018-07-01
Social isolation and loneliness are increasingly prevalent among older adults in the United States, with implications for morbidity and mortality risk. Little research to date has examined the complex person-place transactions that contribute to social well-being in later life. This study aimed to characterize personal and neighborhood contextual influences on social isolation and loneliness among older adults. Interviews were conducted with independent-dwelling men and women (n = 124; mean age 71 years) in the Minneapolis metropolitan area (USA) from June to October, 2015. A convergent mixed-methods design was applied, whereby quantitative and qualitative approaches were used in parallel to gain simultaneous insights into statistical associations and in-depth individual perspectives. Logistic regression models predicted self-reported social isolation and loneliness, adjusted for age, gender, past occupation, race/ethnicity, living alone, street type, residential location, and residential density. Qualitative thematic analyses of interview transcripts probed individual experiences with social isolation and loneliness. The quantitative results suggested that African American adults, those with a higher socioeconomic status, those who did not live alone, and those who lived closer to the city center were less likely to report feeling socially isolated or lonely. The qualitative results identified and explained variation in outcomes within each of these factors. They provided insight on those who lived alone but did not report feeling lonely, finding that solitude was sought after and enjoyed by a portion of participants. Poor physical and mental health often resulted in reporting social isolation, particularly when coupled with poor weather or low-density neighborhoods. At the same time, poor health sometimes provided opportunities for valued social engagement with caregivers, family, and friends. The combination of group-level risk factors and in-depth personal insights provided by this mixed-methodology may be useful to develop strategies that address social isolation and loneliness in older communities. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
Bayesian mixture analysis for metagenomic community profiling.
Morfopoulou, Sofia; Plagnol, Vincent
2015-09-15
Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated provides an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Monte Carlo Markov chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix. Contact: sofia.morfopoulou.10@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
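A toy sketch of the core inference problem: reads compatible with several candidate species, a Dirichlet prior on the mixing proportions, and a Gibbs sampler that alternates between read assignments and proportions. The data are synthetic and this is not the metaMix implementation, which is in R and explores the species space with parallel tempered chains:

```python
# Gibbs sampling of mixing proportions with ambiguous read-species matches.
import numpy as np

rng = np.random.default_rng(3)
S = 3                                     # number of candidate species
true_pi = np.array([0.7, 0.25, 0.05])     # ground-truth proportions
n_reads = 500

# Each read matches its true species, plus spurious extra candidates.
z_true = rng.choice(S, size=n_reads, p=true_pi)
mask = np.zeros((n_reads, S), dtype=bool)
mask[np.arange(n_reads), z_true] = True
mask |= rng.random((n_reads, S)) < 0.3

pi = np.full(S, 1.0 / S)
alpha = np.ones(S)                        # Dirichlet prior
for sweep in range(200):
    # Sample read assignments from their conditional distribution.
    w = mask * pi
    w /= w.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(S, p=wi) for wi in w])
    # Sample proportions from the conditional Dirichlet posterior.
    counts = np.bincount(z, minlength=S)
    pi = rng.dirichlet(alpha + counts)
# After burn-in, pi fluctuates around true_pi.
```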
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
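For a concrete picture of the state-update step being discussed, here is a minimal parallel tempering sketch on a 1D double-well potential: each replica performs Metropolis updates of its configuration, and neighboring temperature indices are then updated by a swap move with the standard acceptance rule. The potential and all parameters are illustrative; the paper's point is that this swap step can itself be treated, and improved, as a Gibbs update of the state indices:

```python
# Parallel tempering: per-replica Metropolis moves plus neighbor swaps
# accepted with probability min(1, exp[(beta_i - beta_j)(U_i - U_j)]).
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: 5.0 * (x**2 - 1.0) ** 2     # double-well potential energy
betas = np.array([8.0, 4.0, 2.0, 1.0])    # inverse temperatures
x = np.zeros(len(betas))                  # one walker per replica

for sweep in range(10000):
    # Configuration update at each temperature.
    for i, b in enumerate(betas):
        prop = x[i] + rng.normal(0, 0.5)
        if rng.random() < np.exp(-b * (U(prop) - U(x[i]))):
            x[i] = prop
    # State-index update: attempt swaps between neighboring temperatures.
    for i in range(len(betas) - 1):
        log_a = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
        if np.log(rng.random()) < log_a:
            x[i], x[i + 1] = x[i + 1], x[i]
```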
Nilsson, U; Jaensson, M; Dahlberg, K; Odencrants, S; Grönlund, Å; Hagberg, L; Eriksson, M
2016-01-13
Day surgery is a well-established practice in many European countries, but only limited information is available regarding postoperative recovery at home, and there is currently no standard procedure for postoperative follow-up. Furthermore, there is a need to make better use of modern technology, such as mobile applications, in assessing patient-related outcomes. This article describes the Recovery Assessment by Phone Points (RAPP) study protocol, a mixed-methods study to evaluate whether a systematic e-assessment follow-up in patients undergoing day surgery is cost-effective and improves postoperative recovery, health and quality of life. This study has a mixed-methods design that includes a multicentre, two-group, parallel, single-blind randomised controlled trial and qualitative interview studies. 1000 patients >17 years of age who are undergoing day surgery will be randomly assigned either to e-assessed postoperative recovery follow-up daily for 14 days, measured via a smartphone app including the Swedish web version of Quality of Recovery (SwQoR), or to standard care (ie, no follow-up). The primary aim is cost-effectiveness. Secondary aims are (A) to explore whether a systematic e-assessment follow-up after day surgery has a positive effect on postoperative recovery, health-related quality of life (QoL) and overall health; (B) to determine whether differences in postoperative recovery are associated with patient characteristics, type of surgery and anaesthesia; (C) to determine whether differences in health literacy have a substantial and distinct effect on postoperative recovery, health and QoL; and (D) to describe day surgery patient and staff experiences with a systematic e-assessment follow-up after day surgery. The primary aim will be measured at 2 weeks postoperatively and secondary outcomes (A-C) at 1 and 2 weeks and (D) at 1 and 4 months. NCT02492191; Pre-results. Published by the BMJ Publishing Group Limited.
Evaluating Statistical Targets for Assembling Parallel Mixed-Format Test Forms
ERIC Educational Resources Information Center
Debeer, Dries; Ali, Usama S.; van Rijn, Peter W.
2017-01-01
Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…
NASA Astrophysics Data System (ADS)
Pacheco, Luz; Smith, Katherine; Hamlington, Peter; Niemeyer, Kyle
2017-11-01
Vertical transport flux in the ocean upper mixed layer has recently been attributed to submesoscale currents, which occur at scales on the order of kilometers in the horizontal direction. These phenomena, which include fronts and mixed-layer instabilities, have been of particular interest due to the effect of turbulent mixing on nutrient transport, which facilitates phytoplankton blooms. We study these phenomena using a non-hydrostatic large eddy simulation of submesoscale currents in the ocean, developed on the extensible, open-source finite element platform FEniCS. Our model solves the standard Boussinesq Euler equations in variational form using the finite element method. FEniCS enables efficient parallel computing on modern systems and supports unstructured grids, so that irregular topography can be considered in the future. The solver will be verified against the well-established NCAR-LES model and validated against observational data. For the verification against NCAR-LES, the velocity, pressure, and buoyancy fields are compared in a surface-wind-driven, open-ocean case. We use this model to study the impacts of uncertainties in the model parameters, such as near-surface buoyancy flux and secondary circulation, and discuss implications.
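To show what solving an equation "in variational form" looks like on this platform, here is a minimal sketch using the legacy FEniCS (dolfin) API for a steady advection-diffusion problem of a buoyancy-like scalar on a unit square. The actual solver couples the full Boussinesq Euler system, so this only illustrates the workflow; the mesh size, velocity, diffusivity, and source are illustrative choices:

```python
# Steady advection-diffusion in variational form with FEniCS:
# find b in Q such that (u . grad(b), q) + (kappa grad(b), grad(q)) = (f, q).
from fenics import *

mesh = UnitSquareMesh(32, 32)
Q = FunctionSpace(mesh, "CG", 1)
b = TrialFunction(Q)
q = TestFunction(Q)

u = Constant((1.0, 0.0))        # prescribed advecting velocity
kappa = Constant(0.01)          # diffusivity
f = Expression("exp(-100*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)))", degree=2)

a = (dot(u, grad(b)) * q + kappa * dot(grad(b), grad(q))) * dx
rhs = f * q * dx
bc = DirichletBC(Q, Constant(0.0), "on_boundary")

bh = Function(Q)
solve(a == rhs, bh, bc)         # assembles and solves the linear system
```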
A new conformal absorbing boundary condition for finite element meshes and parallelization of FEMATS
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Volakis, J. L.; Nguyen, J.; Nurnberger, M.; Ross, D.
1993-01-01
Some of the progress toward the development and parallelization of an improved version of the finite element code FEMATS is described. This is a finite element code for computing the scattering by arbitrarily shaped three-dimensional composite scatterers. The following tasks were worked on during the report period: (1) new absorbing boundary conditions (ABCs) for truncating the finite element mesh; (2) mixed mesh termination schemes; (3) hierarchical elements and multigridding; (4) parallelization; and (5) various modeling enhancements (antenna feeds, anisotropy, and higher order GIBC).
Novel molecular targets for kRAS downregulation: promoter G-quadruplexes
2016-11-01
…conditions, and described the structure as having mixed parallel/anti-parallel loops of lengths 2:8:10 in the 5’–3’ direction. Using selective small… and anti-parallel loop directionality of lengths 4:10:8 in the 5’–3’ direction, with three stacked tetrads involving guanines in runs B, C, E, and F… a tri-stacked structure incorporating runs B, C, E, and F with intervening loops of 2, 10, and 8 bases in the 5’–3’ direction.
NASA Technical Reports Server (NTRS)
Raju, Manthena S.
1998-01-01
Sprays occur in a wide variety of industrial and power applications and in the processing of materials. A liquid spray is a phase flow with a gas as the continuous phase and a liquid as the dispersed phase (in the form of droplets or ligaments). Interactions between the two phases, which are coupled through exchanges of mass, momentum, and energy, can occur in different ways at different times and locations involving various thermal, mass, and fluid dynamic factors. An understanding of the flow, combustion, and thermal properties of a rapidly vaporizing spray requires careful modeling of the rate-controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as other phenomena. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on sprays to unstructured grids and parallel computing. LSPRAY, which was developed by M.S. Raju of Nyma, Inc., is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo probability density function (PDF) solver. The LSPRAY solver accommodates the use of an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements in the gas-phase solvers. It is used specifically for fuel sprays within gas turbine combustors, but it has many other uses. The spray model used in LSPRAY provided favorable results when applied to stratified-charge rotary combustion (Wankel) engines and several other confined and unconfined spray flames. The source code will be available with the National Combustion Code (NCC) as a complete package.
Brief report: Bereaved parents informing research design: The place of a pilot study.
Donovan, L A; Wakefield, C E; Russell, V; Hetherington, Kate; Cohn, R J
2018-02-23
Risk minimization in research with bereaved parents is important. However, little is known about which research methods balance the sensitivity required for bereaved research participants and the need for generalizable results. To explore parental experiences of participating in mixed method bereavement research via a pilot study. A convergent parallel mixed method design assessing bereaved parents' experience of research participation. Eleven parents whose child was treated for cancer at The Royal Children's Hospital, Brisbane completed the questionnaire/interview being piloted (n = 8 mothers; n = 3 fathers; >6 months and <6 years bereaved). Of these, eight parents completed the pilot study evaluation questionnaire, providing feedback on their experience of participation. Participants acknowledged the importance of bereaved parents being central to research design and the development of bereavement programs. Sixty-three per cent (n = 5/8) of parents described completion of the questionnaire as 'not at all/a little bit' of a burden. Seventy-five per cent (n = 6/8) of parents opting into the telephone interview described participation as 'not at all/a little bit' of a burden. When considering the latest timeframes for participation in bereavement research 63% (n = 5/8) of parents indicated 'no endpoint.' Findings from the pilot study enabled important adjustments to be made to a large-scale future study. As a research method, pilot studies may be utilized to minimize harm and maximize the potential benefits for vulnerable research participants. A mixed method approach allows researchers to generalize findings to a broader population while also drawing on the depth of the lived experience.
Further development of imaging near-field scatterometer
NASA Astrophysics Data System (ADS)
Uebeler, Denise; Pescoller, Lukas; Hahlweg, Cornelius
2015-09-01
In continuation of last year's paper on the use of near-field imaging, which is basically a reflective shadowgraph method, for the characterization of glossy surfaces such as printed matter or laminated material, further developments are discussed. Besides the identification of several types of surfaces and related features to which the method is applicable, several refinements are introduced. The theory of the method is extended, based on a mixed Fourier-optical and geometrical approach, leading to rules of thumb for the resolution to be expected and giving a framework for design. Further, a refined experimental set-up is introduced. Variation of the plane of focus and the incident angle is used to separate the images of the various layers of the surface under test, and cross- and parallel-polarization techniques are applied. Finally, exemplary measurement results are included.
Towards a Standard Mixed-Signal Parallel Processing Architecture for Miniature and Microrobotics
Sadler, Brian M; Hoyos, Sebastian
2014-01-01
The conventional analog-to-digital conversion (ADC) and digital signal processing (DSP) architecture has led to major advances in miniature and micro-systems technology over the past several decades. The outlook for these systems is significantly enhanced by advances in sensing, signal processing, communications and control, and the combination of these technologies enables autonomous robotics on the miniature to micro scales. In this article we look at trends in the combination of analog and digital (mixed-signal) processing, and consider a generalized sampling architecture. Employing a parallel analog basis expansion of the input signal, this scalable approach is adaptable and reconfigurable, and is suitable for a large variety of current and future applications in networking, perception, cognition, and control. PMID:26601042
Cold Electrons as the Drivers of Parallel, Electrostatic Waves in Asymmetric Reconnection
NASA Astrophysics Data System (ADS)
Holmes, J.; Ergun, R.; Newman, D. L.; Wilder, F. D.; Schwartz, S. J.; Goodrich, K.; Eriksson, S.; Torbert, R. B.; Russell, C. T.; Lindqvist, P. A.; Giles, B. L.; Pollock, C. J.; Le Contel, O.; Strangeway, R. J.; Burch, J. L.
2016-12-01
The Magnetospheric MultiScale mission (MMS) has observed several instances of asymmetric reconnection at Earth's magnetopause, where plasma from the magnetosheath encounters that of the magnetosphere. On Earth's dayside, the magnetosphere is often made up of a two-component distribution of cold (<<10 eV) and hot (~1 keV) plasma, sometimes including the cold ion plume. Magnetosheath plasma is primarily warm (~100 eV) post-shock solar wind. Where they meet, magnetopause reconnection alters the magnetic topology such that these two populations are left cohabiting a field line and rapidly mix. There have been several events observed by MMS where the Fast Plasma Instrument (FPI) clearly shows cold ions near the diffusion region impinging upon the warm magnetosheath population. In many of these, we also see patches of strong electrostatic waves parallel to the magnetic field - a smoking gun for rapid mixing via nonlinear processes. Cold ions alone are too slow to create the same waves; solving for roots of a simplified dispersion relation shows the electron population damps out the ion modes. From this, we infer the presence of cold electrons; in one notable case found by Wilder et al. 2016 (in review), they have been observed directly by FPI. Vlasov simulations of plasma mixing for a number of these events closely reproduce the observed electric field signatures. We conclude from numerical analysis and direct MMS observations that cold plasma mixing, including cold electrons, is the primary driver of parallel electrostatic waves observed near the electron diffusion region in asymmetric magnetic reconnection.
Towards implementing coordinated healthy lifestyle promotion in primary care: a mixed method study.
Thomas, Kristin; Bendtsen, Preben; Krevers, Barbro
2015-01-01
Primary care is increasingly being encouraged to integrate healthy lifestyle promotion into routine care. However, implementation has been suboptimal. Coordinated care could facilitate lifestyle promotion practice, but more empirical knowledge is needed about the implementation process of coordinated care initiatives. This study aimed to evaluate the implementation of a coordinated healthy lifestyle promotion initiative in a primary care setting. A mixed-method, convergent, parallel design was used. Three primary care centres took part in a two-year research project. Data collection methods included individual interviews, document data and questionnaires. The General Theory of Implementation was used as a framework in the analysis to integrate the data sources. Multi-disciplinary teams were implemented in the centres, although the role of the teams as a resource for coordinated lifestyle promotion was not fully embedded. Embedding of the teams was challenged by differences among the staff, patients and team members regarding resources, commitment, social norms and roles. The study highlights the importance of identifying and engaging key stakeholders early in an implementation process. The findings showed how the development phase influenced the implementation and embedding processes, adding aspects to the General Theory of Implementation.
An X-ray transparent microfluidic platform for screening of the phase behavior of lipidic mesophases
Khvostichenko, Daria S.; Kondrashkina, Elena; Perry, Sarah L.; Pawate, Ashtamurthy S.; Brister, Keith
2013-01-01
Lipidic mesophases are a class of highly ordered soft materials that form when certain lipids are mixed with water. Understanding the relationship between the composition and the microstructure of mesophases is necessary for fundamental studies of self-assembly in amphiphilic systems and for applications, such as crystallization of membrane proteins. However, the laborious formulation protocol for highly viscous mesophases and the large amounts of material required for sample formulation are significant obstacles in such studies. Here we report a microfluidic platform that facilitates investigations of the phase behavior of mesophases by reducing sample consumption, and automating and parallelizing sample formulation. The mesophases were formulated on-chip using less than 40 nL of material per sample and their microstructure was analyzed in situ using small-angle X-ray scattering (SAXS). The 220 μm-thick X-ray compatible platform was comprised of thin polydimethylsiloxane (PDMS) layers sandwiched between cyclic olefin copolymer (COC) sheets. Uniform mesophases were prepared using an active on-chip mixing strategy coupled with periodic cooling of the sample to reduce the viscosity. We validated the platform by preparing and analyzing mesophases of lipid monoolein (MO) mixed with aqueous solutions of different concentrations of β-octylglucoside (βOG), a detergent frequently used in membrane protein crystallization. Four samples were prepared in parallel on chip, by first metering and automatically diluting βOG to obtain detergent solutions of different concentration, then metering MO, and finally mixing by actuation of pneumatic valves. Integration of detergent dilution and subsequent mixing significantly reduced the number of manual steps needed for sample preparation. Three different types of mesophases typical for monoolein were successfully identified in SAXS data from on-chip samples. Microstructural parameters of identical samples formulated in different chips showed excellent agreement. Phase behavior observed on-chip corresponded well with that of samples prepared via the traditional coupled-syringe method (“off-chip”) using 300-fold larger amount of material, further validating the utility of the microfluidic platform for on-chip characterization of mesophase behavior. PMID:23882463
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly attractive for large-scale parallel calculations. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
NASA Technical Reports Server (NTRS)
Bain, D. B.; Smith, C. E.; Holdeman, J. D.
1992-01-01
A CFD study was performed to analyze the mixing potential of opposed rows of staggered jets injected into confined crossflow in a rectangular duct. Three jet configurations were numerically tested: (1) straight (0 deg) slots; (2) perpendicular slanted (45 deg) slots angled in opposite directions on top and bottom walls; and (3) parallel slanted (45 deg) slots angled in the same direction on top and bottom walls. All three configurations were tested at slot spacing-to-duct height ratios (S/H) of 0.5, 0.75, and 1.0; a jet-to-mainstream momentum flux ratio (J) of 100; and a jet-to-mainstream mass flow ratio of 0.383. Each configuration had its best mixing performance at S/H of 0.75. Asymmetric flow patterns were expected and predicted for all slanted slot configurations. The parallel slanted slot configuration was the best overall configuration at x/H of 1.0 for S/H of 0.75.
Dattilio, Frank M; Edwards, David J A; Fishman, Daniel B
2010-12-01
This article addresses the long-standing divide between researchers and practitioners in the field of psychotherapy, regarding what really works in treatment and the extent to which interventions should be governed by outcomes generated in a "laboratory atmosphere." This alienation has its roots in a positivist paradigm, which is epistemologically incomplete because it fails to provide for context-based practical knowledge. In other fields of evaluation research, it has been superseded by a mixed methods paradigm, which embraces pragmatism and multiplicity. On the basis of this paradigm, we propose and illustrate new scientific standards for research on the evaluation of psychotherapeutic treatments. These include the requirement that projects should comprise several parallel studies that involve randomized controlled trials, qualitative examinations of the implementation of treatment programs, and systematic case studies. The uniqueness of this article is that it contributes a guideline for involving a set of complementary publications, including a review that offers an overall synthesis of the findings from different methodological approaches. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Method and apparatus for second-rank tensor generation
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1991-01-01
A method and apparatus are disclosed for the generation of second-rank tensors, using a photorefractive crystal to perform the outer product between two vectors via four-wave mixing, thereby mapping 2n input data points to n² output data points. Two orthogonal, amplitude-modulated coherent vector beams x and y are expanded and then enter parallel sides of the photorefractive crystal in exact opposition. A beamsplitter is used to direct a coherent pumping beam onto the crystal at an appropriate angle so as to produce a conjugate beam, the matrix product of the vector beams, that propagates in the exact opposite direction from the pumping beam. The conjugate beam thus separated is the tensor output xy^T.
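As a worked example of the operation this optical hardware computes, the outer product of two length-n vectors yields an n × n second-rank tensor, so 2n inputs determine n² outputs:

```python
# Outer product: T[i, j] = x[i] * y[j], i.e., T = x y^T.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 4.0])
T = np.outer(x, y)   # a 3x3 tensor (9 outputs) from 2*3 = 6 inputs
```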
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, in both the recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
GRADSPMHD: A parallel MHD code based on the SPH formalism
NASA Astrophysics Data System (ADS)
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
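The mixed hyperbolic/parabolic divergence correction mentioned above can be illustrated outside of SPH. The following grid-based sketch (Dedner-style cleaning; the cleaning speed, damping time, and field values are assumptions) shows how a scalar field psi carries ∇·B errors away as a wave and damps them in parallel:

```python
import numpy as np

# Dedner-style cleaning on a periodic grid: div(B) errors are advected
# away by a wave in the scalar psi (hyperbolic part, speed ch) and
# damped (parabolic part, time scale tau).
n = 64
h, dt, ch, tau = 1.0 / n, 2e-3, 1.0, 0.05
rng = np.random.default_rng(0)
Bx, By = rng.normal(size=(2, n, n))            # deliberately non-solenoidal

def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

psi = np.zeros((n, n))
for step in range(1501):
    div = ddx(Bx) + ddy(By)
    psi += dt * (-ch**2 * div - psi / tau)     # hyperbolic + parabolic terms
    Bx -= dt * ddx(psi)
    By -= dt * ddy(psi)
    if step % 500 == 0:
        print(step, np.abs(div).max())         # divergence error decays
```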
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
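The two decomposition strategies can be contrasted without any MPI machinery; only the ownership rule differs. A toy sketch (particle positions, counts, and the rank count are arbitrary, and no actual communication is modeled):

```python
import numpy as np

# Contrast of the two parallelization strategies named above,
# for 1000 atoms and 8 ranks.
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))
nrank = 8

# ATDYN-style atomic decomposition: atoms dealt out by index,
# irrespective of where they sit in space.
atomic_owner = np.arange(len(pos)) % nrank

# SPDYN-style domain decomposition: space cut into 2x2x2 cells, each
# rank owning the atoms inside its cell. Spatial locality is what keeps
# short-range force communication cheap at large scale.
cell = np.minimum((pos * 2).astype(int), 1)          # 0 or 1 per axis
domain_owner = cell[:, 0] * 4 + cell[:, 1] * 2 + cell[:, 2]

print(np.bincount(atomic_owner), np.bincount(domain_owner))
```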
Stenger, Kristen M; Ritter-Gooder, Paula K; Perry, Christina; Albrecht, Julie A
2014-12-01
Children are at a higher risk for foodborne illness. The objective of this study was to explore food safety knowledge, beliefs and practices among Hispanic families with young children (≤10 years of age) living within a Midwestern state. A convergent mixed methods design collected qualitative and quantitative data in parallel. Food safety knowledge surveys were administered (n = 90) prior to exploration of beliefs and practices among six focus groups (n = 52) conducted by bilingual interpreters in community sites in five cities/towns. Descriptive statistics determined knowledge scores and thematic coding unveiled beliefs and practices. Data sets were merged to assess concordance. Participants were female (96%), 35.7 (±7.6) years of age, from Mexico (69%), with the majority having a low education level. Food safety knowledge was low (56% ± 11). Focus group themes were: Ethnic dishes popular, Relating food to illness, Fresh food in home country, Food safety practices, and Face to face learning. Mixed methods analysis revealed high self-confidence in preparing food safely with low safe food handling knowledge and the presence of some cultural beliefs. On-site Spanish classes and materials were preferred venues for food safety education. Bilingual food safety messaging targeting common ethnic foods and cultural beliefs and practices is indicated to lower the risk of foodborne illness in Hispanic families with young children. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Histological analysis of the structural composition of ankle ligaments.
Rein, Susanne; Hagert, Elisabet; Schneiders, Wolfgang; Fieguth, Armin; Zwipp, Hans
2015-02-01
Different ankle ligaments have distinct structural compositions. The aim of this study was to analyze the morphological structure of ankle ligaments to further understand their function in ankle stability. One hundred forty ligaments from 10 fresh-frozen cadaver ankle joints were dissected: the calcaneofibular, anterior, and posterior talofibular ligaments; the inferior extensor retinaculum; the talocalcaneal oblique ligament; the canalis tarsi ligament; the deltoid ligament; and the anterior tibiofibular ligament. Hematoxylin-eosin and Elastica van Gieson stains were used for determination of tissue morphology. Three different morphological compositions were identified: dense, mixed, and interlaced. Densely packed ligaments, characterized by parallel bundles of collagen, were primarily seen in the lateral region, the canalis tarsi, and the anterior tibiofibular ligaments. Ligaments with mixed tight and loose parallel bundles of collagenous connective tissue were mainly found in the inferior extensor retinaculum and talocalcaneal oblique ligament. Densely packed and fiber-rich interlacing collagen was primarily seen in the areas of ligament insertion into bone of the deltoid ligament. Ligaments of the lateral region, the canalis tarsi, and the anterior tibiofibular ligaments have tightly packed, parallel collagen bundles and thus can resist high tensile forces. The mixed tight and loose, parallel-oriented collagenous connective tissue of the inferior extensor retinaculum and the talocalcaneal oblique ligament supports the dynamic positioning of the foot on the ground. The interlacing collagen bundles seen at the insertion of the deltoid ligament suggest that these insertion areas are susceptible to tension in a multitude of directions. The morphology and mechanical properties of ankle ligaments may provide an understanding of their response to the loads to which they are subjected. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Finsterbusch, Jürgen
2011-01-01
Experiments with two diffusion weightings applied in direct succession in a single acquisition, so-called double- or two-wave-vector diffusion-weighting (DWV) experiments at short mixing times, have been shown to be a promising tool to estimate cell or compartment sizes, e.g. in living tissue. The basic theory for such experiments predicts that the signal decays for parallel and antiparallel wave vector orientations differ by a factor of three for small wave vectors. This seems surprising because in standard, single-wave-vector experiments the polarity of the diffusion weighting has no influence on the signal attenuation. Thus, the question of how this difference can be understood more pictorially is often raised. In this rather educational manuscript, the phase evolution during a DWV experiment is considered step by step for simple geometries, e.g. diffusion between parallel, impermeable planes oriented perpendicular to the wave vectors, demonstrating how the signal difference develops. Considering the populations of the phase distributions obtained, the factor of three between the signal decays that is predicted by the theory can be reproduced. Furthermore, the intermediate signal decay for orthogonal wave vector orientations can be derived by investigating diffusion in a box. Thus, the presented “phase gymnastics” approach may help in understanding the signal modulation observed in DWV experiments at short mixing times.
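The population argument can be checked numerically. In the sketch below, the long-diffusion-time, zero-mixing-time limit is modeled by drawing independent uniform positions across the gap, with the two central encodings collapsed onto one (x3 = x2); under this sign convention the two orientations' decays differ by the predicted factor of about three at small wave vectors:

```python
import numpy as np

# Long diffusion times: positions before/after each weighting period are
# independent and uniform across the gap; zero mixing time: x3 = x2.
rng = np.random.default_rng(3)
n, q = 2_000_000, 0.5                     # q in units of 1/gap-width
x1, x2, x4 = rng.uniform(0.0, 1.0, size=(3, n))

phi_par  = q * ((x2 - x1) + (x4 - x2))    # parallel wave vectors
phi_anti = q * ((x2 - x1) - (x4 - x2))    # antiparallel wave vectors
E_par, E_anti = np.cos(phi_par).mean(), np.cos(phi_anti).mean()

print(1 - E_par, 1 - E_anti, (1 - E_anti) / (1 - E_par))   # ratio -> ~3
```

The factor of three drops out of the variances: the parallel phase reduces to q(x4 - x1) with variance 2σ², while the antiparallel phase q(2·x2 - x1 - x4) has variance 6σ².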
ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry
NASA Astrophysics Data System (ADS)
Wohltmann, I.; Rex, M.; Lehmann, R.
2009-04-01
We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box-based models: no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow with a method similar to that used in the CLaMS model, but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparisons of model results with tracer data from ER-2 aircraft flights in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, show that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.
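The Lagrangian transport core of such a model reduces to integrating parcel trajectories through a wind field; because parcels are advected individually, no CFL limit constrains the time step. A minimal RK4 sketch with an assumed analytic 2D wind field (a simple vortex, not the model's meteorological input):

```python
import numpy as np

# RK4 advection of air parcels through a prescribed 2D wind field.
def wind(t, xy):
    x, y = xy[..., 0], xy[..., 1]
    return np.stack([-y, x], axis=-1)           # u = -y, v = x (solid body)

def rk4_step(t, xy, dt):
    k1 = wind(t, xy)
    k2 = wind(t + dt / 2, xy + dt / 2 * k1)
    k3 = wind(t + dt / 2, xy + dt / 2 * k2)
    k4 = wind(t + dt, xy + dt * k3)
    return xy + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

parcels = np.random.default_rng(4).uniform(-1, 1, size=(10_000, 2))
t, dt = 0.0, 0.05
for _ in range(100):                            # step size set by accuracy,
    parcels = rk4_step(t, parcels, dt)          # not by a CFL condition
    t += dt
print(parcels[:3])
```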
Loohuis, Anne M M; Wessels, Nienke J; Jellema, Petra; Vermeulen, Karin M; Slieker-Ten Hove, Marijke C; van Gemert-Pijnen, Julia E W C; Berger, Marjolein Y; Dekker, Janny H; Blanker, Marco H
2018-02-02
We aim to assess whether a purpose-developed mobile application (app) is non-inferior regarding effectiveness and cost-effective when used to treat women with urinary incontinence (UI), as compared to care as usual in Dutch primary care. Additionally, we will explore the expectations and experiences of patients and care providers regarding app usage. A mixed-methods study will be performed, combining a pragmatic, randomized-controlled, non-inferiority trial with an extensive process evaluation. Women aged ≥18 years, suffering from UI ≥ 2 times per week and with access to a smartphone or tablet are eligible to participate. The primary outcome will be the change in UI symptom scores at 4 months after randomization, as assessed by the International Consultation on Incontinence Modular Questionnaire UI Short Form. Secondary outcomes will be the change in UI symptom scores at 12 months, as well as the patient-reported global impression of improvement, quality of life, change in sexual functioning, UI episodes per day, and costs at 4 and 12 months. In parallel, we will perform an extensive process evaluation to assess the expectations and experiences of patients and care providers regarding app usage, making use of interviews, focus group sessions, and log data analysis. This study will assess both the effectiveness and cost-effectiveness of app-based treatment for UI. The combination with the process evaluation, which will be performed in parallel, should also give valuable insights into the contextual factors that influence the effectiveness of such a treatment. © 2018 The Authors. Neurourology and Urodynamics Published by Wiley Periodicals, Inc.
GWAS with longitudinal phenotypes: performance of approximate procedures
Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel
2015-01-01
Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
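The semi-parallel trick is to replace the per-SNP regression loop with a few dense matrix operations. A sketch of step 2 of the CTS on synthetic data (sizes and genotype frequencies are arbitrary; step 1, the reduced LMM fit that produces the per-subject slopes, is assumed done):

```python
import numpy as np

# Step 2 of the CTS, vectorized over all SNPs at once ("semi-parallel"):
# regress the estimated random slopes from the reduced LMM on each SNP.
rng = np.random.default_rng(5)
n, m = 1_000, 10_000
snps = rng.binomial(2, 0.3, size=(n, m)).astype(float)   # genotype dosages
slopes = rng.normal(size=n)                 # random-slope BLUPs from step 1

s_c = snps - snps.mean(axis=0)              # center each SNP column
y_c = slopes - slopes.mean()
sxx = (s_c**2).sum(axis=0)
beta = s_c.T @ y_c / sxx                    # all m effect estimates at once
sigma2 = ((y_c**2).sum() - beta**2 * sxx) / (n - 2)
t_stat = beta / np.sqrt(sigma2 / sxx)       # one t statistic per SNP
print(t_stat[:5])
```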
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupertuis, M.A.; Proctor, M.; Acklin, B.
Energy balance and reciprocity relations are studied for harmonic inhomogeneous plane waves that are incident upon a stack of continuous absorbing dielectric media that are macroscopically characterized by their electric and magnetic permittivities and their conductivities. New cross terms between parallel electric and parallel magnetic modes are identified in the fully generalized Poynting vector. The symmetry and the relations between the general Fresnel coefficients are investigated in the context of energy balance at the interface. The contributions of the so-called mixed Poynting vector are discussed in detail. In particular a new transfer matrix is introduced for energy fluxes in thin-film optics based on the Poynting and mixed Poynting vectors. Finally, the study of reciprocity relations leads to a generalization of a theorem of reversibility for conducting and dielectric media. 16 refs.
Peterson, Janey C.; Czajkowski, Susan; Charlson, Mary E.; Link, Alissa R.; Wells, Martin T.; Isen, Alice M.; Mancuso, Carol A.; Allegrante, John P.; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B.
2012-01-01
Objective: To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in three high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA) which we applied to three clinical chronic disease populations. Methods: We employed a sequential mixed methods model (EVOLVE) to design and test the PA/SA intervention in order to increase physical activity in people with coronary artery disease (post-percutaneous coronary intervention [PCI]) or asthma (ASM), and to improve medication adherence in African Americans with hypertension (HTN). In an initial qualitative phase, we explored participant values and beliefs. We next pilot tested and refined the intervention, and then conducted three randomized controlled trials (RCTs) with parallel study design. Participants were randomized to combined PA/SA vs. an informational control (IC) and followed bimonthly for 12 months, assessing for health behaviors and interval medical events. Results: Over 4.5 years, we enrolled 1,056 participants. Changes were sequentially made to the intervention during the qualitative and pilot phases. The three RCTs enrolled 242 PCI, 258 ASM and 256 HTN participants (n=756). Overall, 45.1% of PA/SA participants versus 33.6% of IC participants achieved successful behavior change (p=0.001). In multivariate analysis, PA/SA intervention remained a significant predictor of achieving behavior change (p<0.002, OR=1.66, 95% CI 1.22–2.27), controlling for baseline negative affect, comorbidity, gender, race/ethnicity, medical events, smoking and age. Conclusions: The EVOLVE method is a means by which basic behavioral science research can be translated into efficacious interventions for chronic disease populations. PMID:22963594
Alignment between Protostellar Outflows and Filamentary Structure
NASA Astrophysics Data System (ADS)
Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.; Pokhrel, Riwaj; Sadavoy, Sarah I.; Vorobyov, Eduard I.; Tobin, John J.; Pineda, Jaime E.; Offner, Stella S. R.; Lee, Katherine I.; Kristensen, Lars E.; Jørgensen, Jes K.; Goodman, Alyssa A.; Bourke, Tyler L.; Arce, Héctor G.; Plunkett, Adele L.
2017-09-01
We present new Submillimeter Array (SMA) observations of CO(2-1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles is more consistent with either randomly aligned angles or a mix of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ~3. Our results show that the observed distributions probably hold regardless of the protostar’s multiplicity, age, or the host core’s opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.
Biodegradation of Metal-EDTA Complexes by an Enriched Microbial Population
Thomas, Russell A. P.; Lawlor, Kirsten; Bailey, Mark; Macaskie, Lynne E.
1998-01-01
A mixed culture utilizing EDTA as the sole carbon source was isolated from a mixed inoculum of water from the River Mersey (United Kingdom) and sludge from an industrial effluent treatment plant. Fourteen component organisms were isolated from the culture, including representatives of the genera Methylobacterium, Variovorax, Enterobacter, Aureobacterium, and Bacillus. The mixed culture biodegraded metal-EDTA complexes slowly; the biodegradability was in the order Fe>Cu>Co>Ni>Cd. By incorporation of inorganic phosphate into the medium as a precipitant ligand, heavy metals were removed in parallel to EDTA degradation. The mixed culture also utilized a number of possible EDTA degradation intermediates as carbon sources. PMID:9546167
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from unified finite element frameworks. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
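One concrete instance of the hybrid direct/iterative solver idea named above is an incomplete factorization used as a preconditioner for a Krylov iteration. A sketch on a stand-in 2D Poisson problem with SciPy (not the software developed in this work):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system: 2D Poisson matrix built from a 1D stencil.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.csr_matrix(sp.kronsum(T, T))
b = np.ones(A.shape[0])

# "Hybrid" solve: an incomplete LU factorization (a direct-method
# fragment) preconditions the iterative conjugate-gradient solver.
ilu = spla.spilu(A.tocsc(), drop_tol=1e-3)
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```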
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
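The sampling step has a compact structured-grid analogue: draw white noise and solve a reaction-diffusion system, which yields a Matérn-like Gaussian field. The finite-difference sketch below substitutes for the paper's mixed finite elements; kappa and the noise scaling are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# (kappa^2 - Laplacian) u = white noise on an n x n grid gives a
# Matern-like Gaussian random field; kappa sets the correlation length.
n, kappa = 128, 8.0
h = 1.0 / n
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
A = (kappa**2 * sp.identity(n * n) + sp.kronsum(T, T)).tocsc()

rng = np.random.default_rng(6)
w = rng.normal(size=n * n) / h           # assumed discrete white-noise scaling
u = spla.spsolve(A, w).reshape(n, n)     # one realization of the field
print(u.mean(), u.std())
```

Each new right-hand side draw yields an independent realization, which is what makes the approach attractive for MLMC sampling.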
Special Course on Three-Dimensional Supersonic/Hypersonic Flows Including Separation
1990-01-01
...stage, airbreathing vehicle. Also shown on the figure are two data points for the acceleration of the X-15, which was propelled by an NH3-O2 rocket... Fuel is injected parallel to the flow from the base of the struts and mixes and reacts slowly with the air. As the speed is increased, fuel is also injected from the sides of the struts to achieve more rapid mixing. At the highest speeds, it is desirable to have the fuel and air mix and react as...
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Chen, P.-S.; Gumaste, U.; Leoinne, M.; Stern, P.
1995-01-01
This research program deals with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a by-pass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled 3-component problem were developed in 1994. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 has been developed. It is planned to use the steady-state global solution provided by ENG10 as input to a localized three-dimensional FSI analysis for engine regions where aeroelastic effects may be important.
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy-viscosity-based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines are reported along with some characteristic results.
AC electroosmotic micromixer for chemical processing in a microchannel.
Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo
2006-04-01
A rapid micromixer of fluids in a microchannel is presented. The mixer uses AC electroosmotic flow, which is induced by applying an AC voltage to a pair of coplanar meandering electrodes configured parallel to the channel. To demonstrate the performance of the mixer, dilution experiments were conducted using a dye solution in a channel of 120 µm width. Rapid mixing was observed for flow velocities up to 12 mm s⁻¹. The mixing time was 0.18 s, which was 20-fold faster than that of diffusional mixing without an additional mixing mechanism. Compared with the performance of reported micromixers, the present mixer worked with a shorter mixing length, particularly at low Peclet numbers (Pe < 2 × 10³).
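The 20-fold figure is easy to sanity-check: purely diffusive mixing across the channel takes on the order of w²/(2D). A back-of-envelope sketch, assuming a typical small-dye diffusivity (the value of D is an assumption, not from the paper):

```python
# Diffusive mixing time across a w = 120 um channel vs. the reported 0.18 s.
w = 120e-6            # channel width, m
D = 2e-9              # assumed dye diffusivity, m^2/s
t_diff = w**2 / (2 * D)
print(t_diff, t_diff / 0.18)   # ~3.6 s, i.e. about 20x slower
```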
Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping
2015-01-01
Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253
NASA Astrophysics Data System (ADS)
Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.
2012-01-01
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been paid to using the code on massively parallel leadership class computers. For this purpose, the following features are now available with this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors in the previously published version were corrected. New version program summary. Program title: HFODD (v2.49t) Catalogue identifier: ADFL_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence v3 No. of lines in distributed program, including test data, etc.: 190 614 No. of bytes in distributed program, including test data, etc.: 985 898 Distribution format: tar.gz Programming language: FORTRAN-90 Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5 Operating system: UNIX, LINUX, Windows XP Has the code been vectorized or parallelized?: Yes, parallelized using MPI RAM: 10 Mwords Word size: The code is written in single-precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Classification: 17.22 Catalogue identifier of previous version: ADFL_v2_2 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361 External routines: The user must have access to the NAGLIB subroutine f02axe, or LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex hermitian matrices, the LAPACK subroutines dgetri and dgetrf which invert arbitrary real matrices, the LAPACK subroutines dsyevd, dsytrf and dsytri which compute eigenvalues and eigenfunctions of real symmetric matrices, the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants, the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra, or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
Does the new version supersede the previous version?: Yes Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic ( n-particle- n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166]. Reasons for new version: Version 2.49s of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities. Summary of revisions: Isospin mixing and projection of the HF states has been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. The two-basis method for the solution to the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially-symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several minor errors in the previously published version were corrected. Running time: In serial mode, running 6 HFB iterations for 152Dy for conserved parity and signature symmetries in a full spherical basis of N=14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, runtime for HFB calculations for parity and signature conserved symmetries roughly increases as N, where N is the number of full HO shells.
Using custom-built optimized BLAS and LAPACK libraries (such as in the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries can bring an additional factor 2 speed-up, so that the same 6 HFB iterations now take of the order of 2 min 30 s.
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
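For reference, the classical comparison point costs one sparse solve per time step, which is why its parallelism is confined to space. A minimal 1D Crank-Nicolson sketch (the 1D reduction and parameter values are illustrative; the paper treats the 2D equation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# One Crank-Nicolson step for u_t = u_xx solves
# (I - r/2 A) u^{n+1} = (I + r/2 A) u^n : a sparse solve per step.
n, dt = 200, 1e-4
h = 1.0 / (n + 1)
r = dt / h**2
A = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n), format="csc")
I = sp.identity(n, format="csc")
lhs, rhs = (I - 0.5 * r * A).tocsc(), (I + 0.5 * r * A)

x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)            # u(x,0); exact solution decays exp(-pi^2 t)
for _ in range(100):
    u = spla.spsolve(lhs, rhs @ u)
print(u.max())                   # ~ exp(-pi^2 * 0.01) ~ 0.906
```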
State-independent uncertainty relations and entanglement detection
NASA Astrophysics Data System (ADS)
Qian, Chen; Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffer from the triviality problem of zero lower bounds. Here we develop a method to obtain uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed of the Bloch vectors of incompatible observables and is applicable to both pure and mixed states and to an arbitrary number of N-dimensional observables. The uncertainty relation for the incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on Hermitian matrix sums. Practical entanglement criteria are also presented based on the derived uncertainty relations.
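The flavor of a state-independent bound can be verified numerically in the simplest case. For the qubit pair (σx, σz), the variance sum is bounded below by 1 for every state, so no state makes the bound trivial; a quick Monte Carlo check over random pure states (a generic illustration, not the paper's construction):

```python
import numpy as np

# For a pure qubit state with Bloch vector (x, y, z), x^2 + y^2 + z^2 = 1,
# so Var(sigma_x) + Var(sigma_z) = 2 - x^2 - z^2 >= 1 for every state.
rng = np.random.default_rng(7)
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # random pure states
var_sum = 2.0 - v[:, 0]**2 - v[:, 2]**2
print(var_sum.min())                               # approaches 1, never below
```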
Flow of nanofluid past a Riga plate
NASA Astrophysics Data System (ADS)
Ahmad, Adeel; Asghar, Saleem; Afzal, Sumaira
2016-03-01
This paper studies the mixed convection boundary layer flow of a nanofluid past a vertical Riga plate in the presence of strong suction. The mathematical model incorporates the Brownian motion and thermophoresis effects due to nanofluid and the Grinberg-term for the wall parallel Lorentz force due to Riga plate. The analytical solution of the problem is presented using the perturbation method for small Brownian and thermophoresis diffusion parameters. The numerical solution is also presented to ensure the reliability of the asymptotic method. The comparison of the two solutions shows an excellent agreement. The correlation expressions for skin friction, Nusselt number and Sherwood number are developed by performing linear regression on the obtained numerical data. The effects of nanofluid and the Lorentz force due to Riga plate, on the skin friction are discussed.
Vibrational Dependence of Line Coupling and Line Mixing in Self-Broadened Parallel Bands of NH3
NASA Technical Reports Server (NTRS)
Ma, Q.; Boulet, C.; Tipping, R. H.
2017-01-01
Line coupling and line mixing effects have been calculated for several self-broadened NH3 lines in parallel bands involving an excited v2 mode. It is well known that once the v2 mode is excited, the inversion splitting quickly increases as this quantum number increases. In the present study, we have shown that the v2 dependence of the inversion splitting plays a dominant role in the calculated line-shape parameters. For the v2 band with a 36 cm⁻¹ splitting, the intra-doublet couplings practically disappear, and for the 2v2 and 2v2 - v2 bands with much higher splitting values, they are completely absent. With respect to the inter-doublet coupling, it becomes the most efficient coupling mechanism for the v2 band, but it is also completely absent for bands with higher v2 quantum numbers. Because line mixing is caused by line coupling, the above conclusions on line coupling are also applicable to line mixing. Concerning the check of our calculated line mixing effects, while the present formalism has explained well the line mixing signatures observed in the v1 band, there are large discrepancies between the measured Rosenkranz mixing parameters and our calculated results for the v2 and 2v2 bands. In order to clarify these discrepancies, we propose that new measurements be made. In addition, we have calculated self-broadened half-widths in the v2 and 2v2 bands and made comparisons with several measurements and with the values listed in HITRAN 2012. In general, the agreement with measurements is very good. In contrast, the agreement with HITRAN 2012 is poor, indicating that the empirical formula used to predict the HITRAN 2012 data has to be updated.
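Line mixing enters line-by-line calculations at first order through the Rosenkranz profile, in which each line picks up an asymmetric dispersive term weighted by its mixing parameter. A sketch for a generic two-line doublet (all numbers are illustrative, not NH3 spectroscopy):

```python
import numpy as np

# First-order (Rosenkranz) line mixing for a two-line doublet: each
# Lorentzian gains a dispersive term y_k*(nu - nu0_k), which transfers
# intensity between the components.
nu = np.linspace(990.0, 1010.0, 4000)            # wavenumber grid, cm^-1
nu0 = np.array([998.0, 1002.0])                  # doublet positions
S   = np.array([1.0, 0.8])                       # line intensities
g   = np.array([0.6, 0.6])                       # halfwidths, cm^-1
y   = np.array([0.15, -0.15])                    # Rosenkranz mixing params

alpha = sum(
    S[k] * (g[k] + y[k] * (nu - nu0[k])) / ((nu - nu0[k])**2 + g[k]**2)
    for k in range(2)
) / np.pi
print(alpha.max())
```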
Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems
NASA Astrophysics Data System (ADS)
Endo, Eishin; Toga, Yuta; Sasaki, Munetaka
2015-07-01
We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide the lattice into noninteracting, interpenetrating sublattices. This subdivision enables us to parallelize the Monte Carlo calculation in the SCO method. Such a subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. The method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 10² times through parallel computation with 288 processors.
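The enabling step is graph coloring: sites that share no remaining bond after the stochastic cutoff receive the same color and can be updated simultaneously. A greedy single-process sketch of that idea (the paper itself uses the distributed Kuhn-Wattenhofer algorithm; the graph here is random and illustrative):

```python
import numpy as np

# Color the "interaction graph" left after the stochastic cutoff so that
# equal-color sites share no bond; each color class is a sublattice whose
# sites can be Monte-Carlo updated in parallel.
rng = np.random.default_rng(8)
n_sites, n_bonds = 200, 600
edges = {tuple(sorted(e))
         for e in rng.integers(0, n_sites, (n_bonds, 2)) if e[0] != e[1]}

adj = {v: set() for v in range(n_sites)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

color = {}
for v in range(n_sites):                  # greedy: smallest unused color
    taken = {color[u] for u in adj[v] if u in color}
    color[v] = next(c for c in range(n_sites) if c not in taken)

print(max(color.values()) + 1)            # number of parallel sublattices
```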
Christensen, Nanna K; Bryld, Torsten; Sørensen, Mads D; Arar, Khalil; Wengel, Jesper; Nielsen, Poul
2004-02-07
Two LNA (locked nucleic acid) stereoisomers (beta-L-LNA and alpha-D-LNA) are evaluated in the mirror-image world, that is, by the study of two mixed sequences of LNA and alpha-L-LNA and their L-DNA and L-RNA complements. Both are found to display high-affinity RNA recognition by the formation of duplexes with parallel strand orientation.
High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.
1996-01-01
This research program dealt with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in January 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled three-component problem were developed during 1994 and 1995. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed. During 1995 and 1996 we developed the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames. Benchmark results were presented at the 1996 Computational Aerosciences meeting.
Fox, Don T.; Guo, Luanjing; Fujita, Yoshiko; ...
2015-12-17
Formation of mineral precipitates in the mixing interface between two reactant solutions flowing in parallel in porous media is governed by reactant mixing by diffusion and dispersion and is coupled to changes in porosity/permeability due to precipitation. The spatial and temporal distribution of mixing-dependent precipitation of barium sulfate in porous media was investigated with side-by-side injection of barium chloride and sodium sulfate solutions in thin rectangular flow cells packed with quartz sand. The results for homogeneous sand beds were compared to beds with higher or lower permeability inclusions positioned in the path of the mixing zone. In the homogeneous and high permeability inclusion experiments, BaSO4 precipitate (barite) formed in a narrow deposit along the length and in the center of the solution–solution mixing zone even though dispersion was enhanced within, and downstream of, the high permeability inclusion. In the low permeability inclusion experiment, the deflected BaSO4 precipitation zone broadened around one side and downstream of the inclusion and was observed to migrate laterally toward the sulfate solution. A continuum-scale fully coupled reactive transport model that simultaneously solves the nonlinear governing equations for fluid flow, transport of reactants and geochemical reactions was used to simulate the experiments and provide insight into mechanisms underlying the experimental observations. Lastly, migration of the precipitation zone in the low permeability inclusion experiment could be explained by the coupling effects among fluid flow, reactant transport and localized mineral precipitation reactions.
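The geometry of the homogeneous case has a compact numerical caricature: two streams that mix only by transverse diffusion while advecting downstream deposit product in a thin band along their interface. A sketch with arbitrary parameters (a toy, not a reactive transport model of barite):

```python
import numpy as np

# Two reactants enter in parallel streams; x is treated as a time-like
# coordinate (x ~ u t), transverse diffusion mixes them, and the product
# accumulates where they overlap.
ny, nx = 100, 400
dy, dx = 1.0 / ny, 1.0 / nx
D, u, k = 1e-3, 1.0, 50.0
dt = dx / u

A = np.zeros(ny); A[: ny // 2] = 1.0      # "barium" stream, top half
B = np.zeros(ny); B[ny // 2 :] = 1.0      # "sulfate" stream, bottom half
P = np.zeros((ny, nx))                    # precipitate laid down per column

def lap(f):                               # transverse Laplacian, no-flux walls
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2 * f[1:-1] + f[:-2]
    g[0], g[-1] = f[1] - f[0], f[-2] - f[-1]
    return g / dy**2

for j in range(nx):
    r = k * A * B * dt                    # local precipitation in this column
    A = A + D * lap(A) * dt - r
    B = B + D * lap(B) * dt - r
    P[:, j] = r

print(P.sum(axis=1).argmax(), ny // 2)    # band peaks at the interface row
```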
NASA Astrophysics Data System (ADS)
Horochowska, Martyna; Cieślik-Boczula, Katarzyna; Rospenk, Maria
2018-03-01
It has been shown that Prodan emission-excitation fluorescence spectroscopy supported by Parallel Factor (PARAFAC) analysis is a fast, simple and sensitive method for studying the phase transition from the noninterdigitated gel (Lβ′) state to the interdigitated gel (LβI) phase, triggered by ethanol and 2,2,2-trifluoroethanol (TFE) molecules in dipalmitoylphosphatidylcholine (DPPC) membranes. The relative contribution of lipid phases with the spectral characteristics of each pure phase component is presented as a function of increasing alcohol concentration. Both alcohols can induce formation of the LβI phase, but TFE is a more than six times stronger inducer of the interdigitated phase in DPPC membranes than ethanol. Moreover, in TFE-mixed DPPC membranes, the transition from the Lβ′ to the LβI phase is accompanied by formation of a fluid phase, which most probably serves as a boundary phase between the Lβ′ and LβI regions. In contrast to the three-phase-state model of TFE-mixed DPPC membranes, only a two-phase-state model was detected in ethanol-mixed DPPC membranes.
Subgrid Combustion Modeling for the Next Generation National Combustion Code
NASA Technical Reports Server (NTRS)
Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher
2003-01-01
In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next-generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly without requiring closure and thus provides an attractive feature for simulating complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initializing and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.
Investigating the management of diabetes in nursing homes using a mixed methods approach.
Hurley, L; O'Donnell, M; O'Caoimh, R; Dinneen, S F
2017-05-01
As populations age there is an increased demand for nursing home (NH) care and a parallel increase in the prevalence of diabetes. Despite this, there is growing evidence that the management of diabetes in NHs is suboptimal. The reasons for this are complex and poorly understood. This study aimed to identify the current level of diabetes care in NHs using a mixed methods approach. The nursing managers at all 44 NHs in County Galway in the West of Ireland were invited to participate. A mixed methods approach involved a postal survey, focus group and telephone interviews. The survey response rate was 75% (33/44) and 27% (9/33) of nursing managers participated in the qualitative research. The reported prevalence of diagnosed diabetes was 14% with 80% of NHs treating residents with insulin. Hypoglycaemia was reported as 'frequent' in 19% of NHs. A total of 36% of NHs have staff who have received diabetes education or training and 56% have access to diabetes care guidelines. Staff education was the most cited opportunity for improving diabetes care. Focus group and interview findings highlight variations in the level of support provided by GPs and access to dietetic, podiatry and retinal screening services. There is a need for national clinical guidelines and standards of care for diabetes management in nursing homes, improved access to quality diabetes education for NH staff, and greater integration between healthcare services and NHs to ensure equity, continuity and quality in diabetes care delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.
Huang, N H; Kagel, J R; Rossi, D T
1999-03-01
An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over the 5-1,000 ng/ml range. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally > 95%. A sample throughput benchmark for the method was approximately 8 min per sample. Because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS, using a multiple reaction monitoring (MRM) approach.
Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters
NASA Astrophysics Data System (ADS)
Esler, Kenneth
2011-03-01
Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials, with parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
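The marginal mean model described here can be sketched with off-the-shelf tools. The following is a minimal illustration using statsmodels, whose GEE implementation exposes a bias-reduced small-sample sandwich covariance (one possible correction; the four corrections studied in the paper are not necessarily this one). The simulated stepped wedge data, effect sizes, and column names are all hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate a small stepped wedge trial: every cluster starts in control and
# crosses over to intervention at its assigned step (all values hypothetical).
rng = np.random.default_rng(0)
n_clusters, n_periods, m = 10, 5, 25
rows = []
for c in range(n_clusters):
    step = 1 + c % (n_periods - 1)            # period at which cluster crosses over
    for t in range(n_periods):
        treat = int(t >= step)
        p = 0.3 + 0.05 * treat + 0.02 * t     # weak treatment and time effects
        for _ in range(m):
            rows.append((c, t, treat, rng.binomial(1, p)))
df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

# Marginal (population-average) model with an exchangeable working correlation;
# cov_type="bias_reduced" requests a small-sample corrected sandwich estimator.
model = smf.gee("y ~ treat + C(period)", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit(cov_type="bias_reduced")
print(result.summary())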
Experimental investigations of the wettability of clays and shales
NASA Astrophysics Data System (ADS)
Borysenko, Artem; Clennell, Ben; Sedev, Rossen; Burgar, Iko; Ralston, John; Raven, Mark; Dewhurst, David; Liu, Keyu
2009-07-01
Wettability in argillaceous materials is poorly understood, yet it is critical to hydrocarbon recovery in clay-rich reservoirs and capillary seal capacity in both caprocks and fault gouges. The hydrophobic or hydrophilic nature of clay-bearing soils and sediments also controls to a large degree the movement of spilled nonaqueous phase liquids in the subsurface and the options available for remediation of these pollutants. In this paper the wettability of hydrocarbons contacting shales in their natural state and the tendencies for wettability alteration were examined. Water-wet, oil-wet, and mixed-wet shales from wells in Australia were investigated and were compared with simplified model shales (single and mixed minerals) artificially treated in crude oil. The intact natural shale samples (preserved with their original water content) were characterized petrophysically by dielectric spectroscopy and nuclear magnetic resonance, plus scanning electron, optical and fluorescence microscopy. Wettability alteration was studied using spontaneous imbibition, pigment extraction, and the sessile drop method for contact angle measurement. The mineralogy and chemical compositions of the shales were determined by standard methods. By studying pure minerals and natural shales in parallel, a correlation between the petrophysical properties and wetting behavior was observed. These correlations may potentially be used to assess wettability in downhole measurements.
Experimental and CFD evidence of multiple solutions in a naturally ventilated building.
Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z
2004-02-01
This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.
Golembiewski, Elizabeth; Watson, Dennis P.; Robison, Lisa; Coberg, John W.
2017-01-01
The positive relationship between social support and mental health has been well documented, but individuals experiencing chronic homelessness face serious disruptions to their social networks. Housing First (HF) programming has been shown to improve health and stability of formerly chronically homeless individuals. However, researchers are only just starting to understand the impact HF has on residents' individual social integration. The purpose of the current study was to describe and understand changes in social networks of residents living in a HF program. Researchers employed a longitudinal, convergent parallel mixed method design, collecting quantitative social network data through structured interviews (n = 13) and qualitative data through semi-structured interviews (n = 20). Quantitative results demonstrated a reduction in network size over the course of one year. However, both network density and frequency of contact with network members increased. Qualitative interviews demonstrated a strengthening in the quality of relationships with family and housing providers and a shedding of burdensome and abusive relationships. These results suggest network decay is a possible indicator of participants' recovery process as they discontinued negative relationships and strengthened positive ones. PMID:28890807
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Kintner, Jr., Paul M. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor)
2007-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor); Kintner, Jr., Paul M. (Inventor)
2006-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
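The bit-wise parallelism named in both of these patent records rests on one trick: once samples and PRN code chips are reduced to sign bits packed into machine words, a single XOR plus a population count correlates a whole word of samples at once, replacing per-sample multiply-accumulate. Below is a small illustration of that arithmetic (requires Python 3.10+ for int.bit_count); it illustrates the general technique, not the patented receiver.

def pack_signs(samples):
    """Pack a +/-1 sequence into a Python int, one bit per sample (bit set = -1)."""
    word = 0
    for i, s in enumerate(samples):
        if s < 0:
            word |= 1 << i
    return word

def correlate_bits(word_a, word_b, n):
    """Correlation sum of two packed +/-1 sequences of length n."""
    disagreements = (word_a ^ word_b).bit_count()   # popcount, one op per word
    return n - 2 * disagreements

prn_code = [1, -1, -1, 1, 1, 1, -1, 1]           # toy 8-chip code
samples = [1, -1, -1, 1, -1, 1, -1, 1]           # received sign bits
print(correlate_bits(pack_signs(prn_code), pack_signs(samples), 8))   # -> 6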
NASA Astrophysics Data System (ADS)
Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef
2016-12-01
The homodyne detection with only a single detector represents a promising approach in interferometric applications which enables a significant reduction of the optical system complexity while preserving the fundamental resolution and dynamic range of single frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real time implementation on a programmable hardware platform (e.g. the FPGA - Field Programmable Gate Array or the SoC - System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow displacement measurements at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe counting interferometry closer to industrial applications due to their optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
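The downstream arithmetic of step (b) can be made concrete. In the sketch below, the missing cosine quadrature is reconstructed with a Hilbert transform, which is a common stand-in for the filter/demodulation chain of step (a) rather than the authors' pipeline, and the phase is then unwrapped and scaled to displacement. The wavelength, motion, and sampling values are invented.

import numpy as np
from scipy.signal import hilbert

wavelength = 633e-9                         # assumed HeNe source
v = 0.05                                    # assumed 5 cm/s target velocity
t = np.linspace(0.0, 1e-4, 8192)
true_disp = v * t
phase_true = 4 * np.pi * true_disp / wavelength   # double-pass Michelson phase
detector = np.sin(phase_true)               # the single-detector signal

analytic = hilbert(detector)                # sine -> complex analytic signal
phase = np.unwrap(np.angle(analytic))       # phase unwrapping
displacement = phase * wavelength / (4 * np.pi)   # scale linearization

err = (displacement - displacement[0]) - (true_disp - true_disp[0])
print("max error away from edges (m):", np.abs(err[200:-200]).max())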
Parallel algorithm of VLBI software correlator under multiprocessor environment
NASA Astrophysics Data System (ADS)
Zheng, Weimin; Zhang, Dong
2007-11-01
The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the large volumes of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators have flexible structure, scalability, and the ability to correlate data from 10 stations.
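For orientation, the core arithmetic of an FX-style software correlator is small: FFT each station's stream frame by frame, cross-multiply, and accumulate the visibility spectrum; each frame is independent work, which is what pipelining, Pthreads, or MPI distribute. A toy sketch with synthetic two-station data and an invented sample delay:

import numpy as np

rng = np.random.default_rng(1)
n_fft, n_frames, delay = 1024, 64, 7        # invented sizes and delay

common = rng.standard_normal(n_fft * n_frames + delay)
station_a = common[delay:] + 0.5 * rng.standard_normal(n_fft * n_frames)
station_b = common[:-delay] + 0.5 * rng.standard_normal(n_fft * n_frames)

vis = np.zeros(n_fft, dtype=complex)
for k in range(n_frames):                   # each frame is independent work
    sl = slice(k * n_fft, (k + 1) * n_fft)
    vis += np.fft.fft(station_a[sl]) * np.conj(np.fft.fft(station_b[sl]))
vis /= n_frames

# The phase slope of the visibility across frequency encodes the delay.
slope = np.angle(vis[1:n_fft // 2] / vis[:n_fft // 2 - 1]).mean()
print("estimated delay (samples):", slope * n_fft / (2 * np.pi))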
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
Biphoton Generation Driven by Spatial Light Modulation: Parallel-to-Series Conversion
NASA Astrophysics Data System (ADS)
Zhao, Luwei; Guo, Xianxin; Sun, Yuan; Su, Yumian; Loy, M. M. T.; Du, Shengwang
2016-05-01
We demonstrate the generation of narrowband biphotons with controllable temporal waveform by spontaneous four-wave mixing in cold atoms. In the group-delay regime, we study the dependence of the biphoton temporal waveform on the spatial profile of the pump laser beam. By using a spatial light modulator, we manipulate the spatial profile of the pump laser and map it onto the two-photon entangled temporal wave function. This parallel-to-series conversion (or spatial-to-temporal mapping) enables coding the parallel classical information of the pump spatial profile to the sequential temporal waveform of the biphoton quantum state. The work was supported by the Hong Kong RGC (Project No. 601113).
A CS1 pedagogical approach to parallel thinking
NASA Astrophysics Data System (ADS)
Rague, Brian William
Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.
Jones, Ryan J. R.; Shinde, Aniketa; Guevarra, Dan; ...
2015-01-05
Many energy technologies require electrochemical stability or preactivation of functional materials. Due to the long experiment duration required for either electrochemical preactivation or evaluation of operational stability, parallel screening is required to enable high throughput experimentation. We found that imposing operational electrochemical conditions on a library of materials in parallel creates several opportunities for experimental artifacts. We discuss the electrochemical engineering principles and operational parameters that mitigate artifacts in the parallel electrochemical treatment system. We also demonstrate the effects of resistive losses within the planar working electrode through a combination of finite element modeling and illustrative experiments. Operation of the parallel-plate, membrane-separated electrochemical treatment system is demonstrated by exposing a composition library of mixed metal oxides to oxygen evolution conditions in 1 M sulfuric acid for 2 h. This application is particularly important because the electrolysis and photoelectrolysis of water are promising future energy technologies inhibited by the lack of highly active, acid-stable catalysts containing only earth-abundant elements.
NASA Technical Reports Server (NTRS)
Ergun, R. E.; Holmes, J. C.; Goodrich, K. A.; Wilder, F. D.; Stawarz, J. E.; Eriksson, S.; Newman, D. L.; Schwartz, S. J.; Goldman, M. V.; Sturner, A. P.;
2016-01-01
We report observations from the Magnetospheric Multiscale satellites of large-amplitude, parallel, electrostatic waves associated with magnetic reconnection at the Earth's magnetopause. The observed waves have parallel electric fields (E(sub parallel)) with amplitudes on the order of 100 mV/m and display nonlinear characteristics that suggest a possible net E(sub parallel). These waves are observed within the ion diffusion region and adjacent to (within several electron skin depths) the electron diffusion region. They are in or near the magnetosphere side current layer. Simulation results support that the strong electrostatic linear and nonlinear wave activity appears to be driven by a two-stream instability, which is a consequence of mixing cold (less than 10 eV) plasma in the magnetosphere with warm (approximately 100 eV) plasma from the magnetosheath on a freshly reconnected magnetic field line. The frequent observation of these waves suggests that cold plasma is often present near the magnetopause.
Accurate secondary structure prediction and fold recognition for circular dichroism spectroscopy
Micsonai, András; Wien, Frank; Kernya, Linda; Lee, Young-Ho; Goto, Yuji; Réfrégiers, Matthieu; Kardos, József
2015-01-01
Circular dichroism (CD) spectroscopy is a widely used technique for the study of protein structure. Numerous algorithms have been developed for the estimation of the secondary structure composition from the CD spectra. These methods often fail to provide acceptable results on α/β-mixed or β-structure–rich proteins. The problem arises from the spectral diversity of β-structures, which has hitherto been considered as an intrinsic limitation of the technique. The predictions are less reliable for proteins of unusual β-structures such as membrane proteins, protein aggregates, and amyloid fibrils. Here, we show that the parallel/antiparallel orientation and the twisting of the β-sheets account for the observed spectral diversity. We have developed a method called β-structure selection (BeStSel) for the secondary structure estimation that takes into account the twist of β-structures. This method can reliably distinguish parallel and antiparallel β-sheets and accurately estimates the secondary structure for a broad range of proteins. Moreover, the secondary structure components applied by the method are characteristic to the protein fold, and thus the fold can be predicted to the level of topology in the CATH classification from a single CD spectrum. By constructing a web server, we offer a general tool for a quick and reliable structure analysis using conventional CD or synchrotron radiation CD (SRCD) spectroscopy for the protein science research community. The method is especially useful when X-ray or NMR techniques fail. Using BeStSel on data collected by SRCD spectroscopy, we investigated the structure of amyloid fibrils of various disease-related proteins and peptides. PMID:26038575
Statistical method to compare massive parallel sequencing pipelines.
Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P
2017-03-01
Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). The same trends were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
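The modeling idea can be illustrated on the smallest possible case: a Poisson log-linear model fit to a single 2x2 table of variant calls from two pipelines, in which the interaction coefficient is the log odds ratio of agreement. The counts below are invented, and the paper's full analysis (heterogeneity models across patients plus a mixed model) goes well beyond this sketch.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

table = pd.DataFrame({
    "pipeA": [1, 1, 0, 0],                  # variant called by pipeline A
    "pipeB": [1, 0, 1, 0],                  # variant called by pipeline B
    "count": [1700.0, 550.0, 150.0, 0.0],   # hypothetical cell counts
})
table["count"] += 0.5                       # continuity correction for the empty cell

fit = smf.glm("count ~ pipeA * pipeB", data=table,
              family=sm.families.Poisson()).fit()
print("log odds ratio of agreement:", fit.params["pipeA:pipeB"])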
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multilevel inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multilevel inverter and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multilevel inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
Annular fuel and air co-flow premixer
Stevenson, Christian Xavier; Melton, Patrick Benedict; York, William David
2013-10-15
Disclosed is a premixer for a combustor including an annular outer shell and an annular inner shell. The inner shell defines an inner flow channel inside of the inner shell and is located to define an outer flow channel between the outer shell and the inner shell. A fuel discharge annulus is located between the outer flow channel and the inner flow channel and is configured to inject a fuel flow into a mixing area in a direction substantially parallel to an outer airflow through the outer flow channel and an inner flow through the inner flow channel. Further disclosed are a combustor including a plurality of premixers and a method of premixing air and fuel in a combustor.
2011-01-01
Background Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. Methods A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. Results The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Conclusions Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population. PMID:21888678
Lean direct injection diffusion tip and related method
Varatharajan, Balachandar [Cincinnati, OH; Ziminsky, Willy S [Simpsonville, SC; Lipinski, John [Simpsonville, SC; Kraemer, Gilbert O [Greer, SC; Yilmaz, Ertan [Niskayuna, NY; Lacy, Benjamin [Greer, SC
2012-08-14
A nozzle for a gas turbine combustor includes a first radially outer tube defining a first passage having an inlet and an outlet, the inlet adapted to supply air to a reaction zone of the combustor. A center body is located within the first radially outer tube, the center body including a second radially intermediate tube for supplying fuel to the reaction zone and a third radially inner tube for supplying air to the reaction zone. The second intermediate tube has a first outlet end closed by a first end wall that is formed with a plurality of substantially parallel, axially-oriented air outlet passages for the additional air in the third radially inner tube, each air outlet passage having a respective plurality of associated fuel outlet passages in the first end wall for the fuel in the second radially intermediate tube. The respective plurality of associated fuel outlet passages have non-parallel center axes that intersect a center axis of the respective air outlet passage to locally mix fuel and air exiting said center body.
NASA Astrophysics Data System (ADS)
Lagus, Todd P.; Edd, Jon F.
2013-03-01
Most cell biology experiments are performed in bulk cell suspensions where cell secretions become diluted and mixed in a contiguous sample. Confinement of single cells to small, picoliter-sized droplets within a continuous phase of oil provides chemical isolation of each cell, creating individual microreactors where rare cell qualities are highlighted and otherwise undetectable signals can be concentrated to measurable levels. Recent work in microfluidics has yielded methods for the encapsulation of cells in aqueous droplets and hydrogels at kilohertz rates, creating the potential for millions of parallel single-cell experiments. However, commercial applications of high-throughput microdroplet generation and downstream sensing and actuation methods are still emerging for cells. Using fluorescence-activated cell sorting (FACS) as a benchmark for commercially available high-throughput screening, this focused review discusses the fluid physics of droplet formation, methods for cell encapsulation in liquids and hydrogels, sensors and actuators and notable biological applications of high-throughput single-cell droplet microfluidics.
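One quantitative constraint worth keeping in mind (standard Poisson loading statistics, background knowledge rather than a result of this review): when cells arrive at the drop maker at random, the cell count per droplet is Poisson distributed, which caps the single-cell encapsulation fraction. A quick calculation:

import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

for lam in (0.1, 0.3, 1.0):         # mean cells per droplet
    p_single = poisson_pmf(1, lam)
    p_multi = 1.0 - poisson_pmf(0, lam) - p_single
    print(f"lambda={lam}: single-cell {p_single:.1%}, multi-cell {p_multi:.1%}")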
Guo, Wei-Dong; Huang, Jian-Ping; Hong, Hua-Sheng; Xu, Jing; Deng, Xun
2010-06-01
The distribution and estuarine behavior of fluorescent components of chromophoric dissolved organic matter (CDOM) from the Jiulong Estuary were determined by fluorescence excitation emission matrix spectroscopy (EEMs) combined with parallel factor analysis (PARAFAC). The feasibility of these components as tracers for organic pollution in estuarine environments was also evaluated. Four separate fluorescent components were identified by PARAFAC, including three humic-like components (C1: 240, 310/382 nm; C2: 230, 250, 340/422 nm; C4: 260, 390/482 nm) and one protein-like component (C3: 225, 275/342 nm). These results indicated that the UV humic-like peak A area designated by the traditional "peak-picking" method was not a single peak but actually a combination of several fluorescent components, with inherent links to the so-called marine humic-like peak M and the terrestrial humic-like peak C. Component C2, which includes peak M, decreased with increasing salinity in the Jiulong Estuary, demonstrating that peak M cannot be regarded as a specific indicator of the "marine" humic-like component. The two humic-like components C1 and C2 showed addition behavior in the turbidity maximum region (salinity < 6) and conservative mixing behavior over the rest of the estuary, while the humic-like component C4 showed conservative mixing behavior throughout the estuary. The protein-like component C3, however, showed nonconservative mixing behavior, suggesting an autochthonous estuarine origin. EEMs-PARAFAC can provide a fluorescent fingerprint to differentiate the DOM features of the three tributaries of the Jiulong River. The observed linear relationships of the humic-like components and the absorption coefficient a(280) with chemical oxygen demand (COD) and biological oxygen demand (BOD5) suggest that the optical properties of CDOM may provide a fast, in situ way to monitor variations in the degree of organic pollution in estuarine environments.
Enhanced electron mixing and heating in 3-D asymmetric reconnection at the Earth's magnetopause
Le, Ari Yitzchak; Daughton, William Scott; Chen, Li -Jen; ...
2017-03-01
Here, electron heating and mixing during asymmetric reconnection are studied with a 3-D kinetic simulation that matches plasma parameters from Magnetospheric Multiscale (MMS) spacecraft observations of a magnetopause diffusion region. The mixing and heating are strongly enhanced across the magnetospheric separatrix compared to a 2-D simulation. The transport of particles across the separatrix in 3-D is attributed to lower hybrid drift turbulence excited at the steep density gradient near the magnetopause. In the 3-D simulation (and not the 2-D simulation), the electron temperature parallel to the magnetic field within the mixing layer is significantly higher than its upstream value, in agreement with the MMS observations.
ERIC Educational Resources Information Center
Duarte, Robert; Nielson, Janne T.; Dragojlovic, Veljko
2004-01-01
A group of techniques aimed at synthesizing a large number of structurally diverse compounds is called combinatorial synthesis. The synthesis of chemiluminescent esters by parallel combinatorial synthesis and by mix-and-split combinatorial synthesis is demonstrated.
Parallelization of the FLAPW method and comparison with the PPW method
NASA Astrophysics Data System (ADS)
Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur
2000-03-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.
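The stated strategy, dividing the plane-wave components of each state among processors, can be sketched in a few lines. The example below assumes mpi4py, runs under mpirun, and uses a stand-in diagonal operator in place of the FLAPW Hamiltonian; only the distribution-plus-reduction pattern reflects the paper.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_pw = 4096                                  # total plane-wave components
local = slice(rank * n_pw // size, (rank + 1) * n_pw // size)

rng = np.random.default_rng(42)              # same seed -> same global vectors
coeff = rng.standard_normal(n_pw) + 1j * rng.standard_normal(n_pw)
diag_h = np.linspace(0.0, 50.0, n_pw)        # stand-in diagonal "Hamiltonian"

# Each rank works only on its slice of the plane-wave expansion ...
local_energy = np.vdot(coeff[local], diag_h[local] * coeff[local]).real
local_norm = np.vdot(coeff[local], coeff[local]).real

# ... and scalar results are combined with a reduction.
energy = comm.allreduce(local_energy, op=MPI.SUM)
norm = comm.allreduce(local_norm, op=MPI.SUM)
if rank == 0:
    print("expectation value:", energy / norm)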
Massively parallel sequencing-enabled mixture analysis of mitochondrial DNA samples.
Churchill, Jennifer D; Stoljarova, Monika; King, Jonathan L; Budowle, Bruce
2018-02-22
The mitochondrial genome has a number of characteristics that provide useful information to forensic investigations. Massively parallel sequencing (MPS) technologies offer improvements to the quantitative analysis of the mitochondrial genome, specifically the interpretation of mixed mitochondrial samples. Two-person mixtures with nuclear DNA ratios of 1:1, 5:1, 10:1, and 20:1 of individuals from different and similar phylogenetic backgrounds and three-person mixtures with nuclear DNA ratios of 1:1:1 and 5:1:1 were prepared using the Precision ID mtDNA Whole Genome Panel and Ion Chef, and sequenced on the Ion PGM or Ion S5 sequencer (Thermo Fisher Scientific, Waltham, MA, USA). These data were used to evaluate whether and to what degree MPS mixtures could be deconvolved. Analysis was effective in identifying the major contributor in each instance, while SNPs from the minor contributor's haplotype were identified only in the 1:1, 5:1, and 10:1 two-person mixtures. While the major contributor was identified from the 5:1:1 mixture, analysis of the three-person mixtures was more complex, and the mixed haplotypes could not be completely parsed. These results indicate that mixed mitochondrial DNA samples may be interpreted with the use of MPS technologies.
Dynamic file-access characteristics of a production parallel scientific workload
NASA Technical Reports Server (NTRS)
Kotz, David; Nieuwejaar, Nils
1994-01-01
Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
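The flavor of such a decomposition can be sketched as follows: score each polygon by its boundary vertex count plus the pixel count of its bounding box at the target resolution, then assign polygons, largest first, to the least-loaded process. The exact weighting and allocation rule used by DMPC may differ; this is an illustration of the scheme.

import heapq

def complexity(polygon, cell_size):
    """Boundary vertex count plus rasterized bounding-box pixel count."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    bbox_pixels = (max(1, round((max(xs) - min(xs)) / cell_size)) *
                   max(1, round((max(ys) - min(ys)) / cell_size)))
    return len(polygon) + bbox_pixels

def decompose(polygons, n_procs, cell_size=1.0):
    scored = sorted(((complexity(p, cell_size), i)
                     for i, p in enumerate(polygons)), reverse=True)
    heap = [(0, proc, []) for proc in range(n_procs)]    # (load, id, assigned)
    heapq.heapify(heap)
    for score, idx in scored:            # largest first gives better balance
        load, proc, assigned = heapq.heappop(heap)
        assigned.append(idx)
        heapq.heappush(heap, (load + score, proc, assigned))
    return sorted(heap, key=lambda h: h[1])

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
sliver = [(0, 0), (100, 1), (100, 2), (0, 1)]
for load, proc, idxs in decompose([square, sliver, square, sliver], 2):
    print(f"process {proc}: polygons {idxs}, load {load}")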
Rapid mix concepts for low emission combustors in gas turbine engines
NASA Technical Reports Server (NTRS)
Talpallikar, Milind V.; Smith, Clifford E.; Lai, Ming-Chia
1990-01-01
NASA LeRC has identified the Rich burn/Quick mix/Lean burn (RQL) combustor as a potential gas turbine combustor concept to reduce NOx emissions in High Speed Civil Transport (HSCT) aircraft. To demonstrate reduced NOx levels, NASA LeRC soon will test a flametube version of an RQL combustor. The critical technology needed for the RQL combustor is a method of quickly mixing combustion air with rich burn gases. Two concepts were proposed to enhance jet mixing in a circular cross-section: the Asymmetric Jet Penetration (AJP) concept; and the Lobed Mixer (LM) concept. In Phase 1, two preliminary configurations of the AJP concept were compared with a conventional 12-jet radial-inflow slot design. The configurations were screened using an advanced 3-D Computational Fluid Dynamics (CFD) code named REFLEQS. Both non-reacting and reacting analyses were performed. For an objective comparison, the conventional design was optimized by parametric variation of the jet-to-mainstream momentum flux (J) ratio. The optimum J was then employed in the AJP simulations. Results showed that the three-jet AJP configuration was superior in overall mixedness compared to the conventional design. With regard to NOx emissions, however, the AJP configuration was inferior. The higher emission level for AJP was caused by a single hot spot located in the wake of the central jet as it entered the combustor. Ways of maintaining good mixedness while eliminating the hot spot were identified for Phase 2 study. Overall, Phase 1 showed the viability of using CFD analyses to evaluate quick-mix concepts. Advanced mixing concepts are likely to reduce NOx emissions in RQL combustors and should be explored in Phase 2 through parallel numerical and experimental work.
Re-Visiting the Electronic Energy Map of the Copper Dimer by Double-Resonant Four-Wave Mixing
NASA Astrophysics Data System (ADS)
Visser, Bradley; Bornhauser, Peter; Beck, Martin; Knopp, Gregor; Marquardt, Roberto; Gourlaouen, Christophe; van Bokhoven, Jeroen A.; Radi, Peter
2017-06-01
The copper dimer is one of the most studied transition metal (TM) diatomics due to its alkali-metal-like electronic shell structure, strongly bound ground state and chemical reactivity. The high electronic promotion energy in the copper atom yields numerous low-lying electronic states compared to TM dimers with d-hole electronic configurations. Thus, through extensive study, the excited electronic structure of Cu_2 is relatively well known; in practice, however, few excited states have been investigated with rotational resolution or even assigned term symbols or dissociation limits. The spectroscopic methods used to investigate the copper dimer until now have not possessed sufficient spectral selectivity, which has complicated the analysis of the often overlapping transitions. Resonant four-wave mixing is a non-linear, absorption-based spectroscopic method. In favorable cases, the two-color version (TC-RFWM) enables purely optical, mass-selective spectral measurements in a mixed molecular beam. Additionally, by labelling individual rotational levels in the common intermediate state, the spectra are dramatically simplified. In this work, we report on the rotationally resolved characterization of low-lying electronic states of dicopper. Several term symbols have been assigned unambiguously. De-perturbation studies shed light on the complex electronic structure of the molecule. Furthermore, a new low-lying electronic state of Cu_2 is discovered, with important implications for the high-level theoretical structure calculations performed in parallel. In fact, the ab initio methods applied yield relative energies among the electronic levels that are almost quantitative and allow assignment of the newly observed state, which is governed by spin-orbit interacting levels.
Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick
2017-03-09
Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
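A classic instance of such reformulation (an illustration of the general point, not the paper's algorithm): the recurrence x_i = a_i * x_{i-1} + b_i looks inherently serial, but its update pairs compose associatively, so independent chunks can be reduced separately and then combined.

from functools import reduce

def compose(f, g):
    """Apply (a1, b1) then (a2, b2): x -> a2*(a1*x + b1) + b2."""
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

steps = [(0.9, 1.0), (1.1, -0.5), (0.7, 2.0), (1.0, 0.3)]
x0 = 5.0

# Serial evaluation of x_i = a_i * x_{i-1} + b_i
x = x0
for a, b in steps:
    x = a * x + b

# Parallel-style evaluation: reduce each chunk independently, then combine.
left = reduce(compose, steps[:2])
right = reduce(compose, steps[2:])
a, b = compose(left, right)
print(x, a * x0 + b)                 # both approximately 6.185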
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time and corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of ‖u_t(t)‖_{L_2(Ω)} = ‖u_t‖_2, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
Strength, Deformation and Friction of in situ Rock
1974-12-01
[OCR residue from a list of figures; recoverable captions: strength of Kayenta sandstone, Mixed Company site, Colorado; strength as a function of density for specimens cored perpendicular and parallel to bedding; photomicrograph of Kayenta sandstone (x30); stress difference as a function of density for triaxial tests up to P = 4.0; effect of specimen size on strength for Kayenta sandstone, Mixed Company site, Colorado.]
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A twofold improvement in computation speed has been achieved compared to the conventional method on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-fold improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
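For context, the kernel inside such a computation is a Fresnel diffraction transform; a common single-FFT transfer-function form is sketched below. The paper's contribution is the communication-avoiding tiling and parallelization of such transforms to gigapixel scale, so the dimensions and optical parameters here are deliberately small and illustrative.

import numpy as np

def fresnel_propagate(field, wavelength, z, pitch):
    """Single-FFT Fresnel propagation via the transfer-function method."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)              # spatial frequencies
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel approximation to the angular-spectrum transfer function
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

obj = np.zeros((256, 256), dtype=complex)
obj[120:136, 120:136] = 1.0                      # small square aperture
holo = fresnel_propagate(obj, wavelength=633e-9, z=0.05, pitch=8e-6)
print(np.abs(holo).max())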
Parallelization of the FLAPW method
NASA Astrophysics Data System (ADS)
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
CSM parallel structural methods research
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1989-01-01
Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.
2014-01-01
Background Policymakers and researchers seek answers to how liberalized drug policies affect people who inject drugs (PWID). In response to concerns about the failing “war on drugs,” Mexico recently implemented drug policy reforms that partially decriminalized possession of small amounts of drugs for personal use while promoting drug treatment. Recognizing important epidemiologic, policy, and socioeconomic differences between the United States—where possession of any psychoactive drugs without a prescription remains illegal—and Mexico—where possession of small quantities for personal use was partially decriminalized, we sought to assess changes over time in knowledge, attitudes, behaviors, and infectious disease profiles among PWID in the adjacent border cities of San Diego, CA, USA, and Tijuana, Baja California, Mexico. Methods Based on extensive binational experience and collaboration, from 2012–2014 we initiated two parallel, prospective, mixed methods studies: Proyecto El Cuete IV in Tijuana (n = 785) and the STAHR II Study in San Diego (n = 575). Methods for sampling, recruitment, and data collection were designed to be compatible in both studies. All participants completed quantitative behavioral and geographic assessments and serological testing (HIV in both studies; hepatitis C virus and tuberculosis in STAHR II) at baseline and four semi-annual follow-up visits. Between follow-up assessment visits, subsets of participants completed qualitative interviews to explore contextual factors relating to study aims and other emergent phenomena. Planned analyses include descriptive and inferential statistics for quantitative data, content analysis and other mixed-methods approaches for qualitative data, and phylogenetic analysis of HIV-positive samples to understand cross-border transmission dynamics. Results Investigators and research staff shared preliminary findings across studies to provide feedback on instruments and insights regarding local phenomena. As a result, recruitment and data collection procedures have been implemented successfully, demonstrating the importance of binational collaboration in evaluating the impact of structural-level drug policy reforms on the behaviors, health, and wellbeing of PWID across an international border. Conclusions Our prospective, mixed methods approach allows each study to be responsive to emerging phenomena within local contexts while regular collaboration promotes sharing insights across studies. The strengths and limitations of this approach may serve as a guide for other evaluations of harm reduction policies internationally. PMID:24520885
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
High-Performance Reactive Particle Tracking with Adaptive Representation
NASA Astrophysics Data System (ADS)
Schmidt, M.; Benson, D. A.; Pankavich, S.
2017-12-01
Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.
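A minimal sketch of the hybrid-particle idea described above, under illustrative parameters and rate laws (not the authors' implementation): each particle carries multiple species and is treated as its own well-mixed volume, so the transport step and the reaction step are independent and each loop is trivially parallelizable over particles.

import numpy as np

rng = np.random.default_rng(0)
n_p, D, dt, k = 1000, 1e-9, 1.0, 0.5   # particles, diffusivity [m^2/s], step [s], rate

x = rng.uniform(0.0, 1e-3, n_p)        # particle positions [m]; boundaries ignored
A = rng.uniform(0.5, 1.0, n_p)         # species A carried by each particle
B = rng.uniform(0.5, 1.0, n_p)         # species B carried by each particle

for step in range(100):
    # Transport: independent Brownian displacements (parallel over particles).
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_p)
    # Reaction: A + B -> C inside each well-mixed particle (explicit Euler).
    r = k * A * B * dt
    A -= r
    B -= r

print(f"mean A remaining: {A.mean():.3f}")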
Retail Food Store Access in Rural Appalachia: A Mixed Methods Study.
Thatcher, Esther; Johnson, Cassandra; Zenk, Shannon N; Kulbok, Pamela
2017-05-01
To describe how characteristics of food retail stores (potential access) and other factors influence self-reported food shopping behavior (realized food access) among low-income, rural Central Appalachian women. Cross-sectional descriptive. Potential access was assessed through store mapping and in-store food audits. Factors influencing consumers' realized access were assessed through in-depth interviews. Results were merged using a convergent parallel mixed methods approach. Food stores (n = 50) and adult women (n = 9) in a rural Central Appalachian county. Potential and realized food access were described across five dimensions: availability, accessibility, affordability, acceptability, and accommodation. Supermarkets had better availability of healthful foods, followed by grocery stores, dollar stores, and convenience stores. On average, participants lived within 10 miles of 3.9 supermarkets or grocery stores, and traveled 7.5 miles for major food shopping. Participants generally shopped at the closest store that met their expectations for food availability, price, service, and atmosphere. Participants' perceptions of stores diverged from each other and from in-store audit findings. Findings from this study can help public health nurses engage with communities to make affordable, healthy foods more accessible. Recommendations are made for educating low-income consumers and partnering with food stores. © 2016 Wiley Periodicals, Inc.
Stewart, Jennifer M
2014-01-01
To assess the barriers and facilitators to using African American churches as sites for implementation of evidence-based HIV interventions among young African American women. Mixed methods cross-sectional design. African American churches in Philadelphia, PA. 142 African American pastors, church leaders, and young adult women ages 18 to 25. Mixed methods convergent parallel design. The majority of young adult women reported engaging in high-risk HIV-related behaviors. Although church leaders reported willingness to implement HIV risk-reduction interventions, they were unsure of how to initiate this process. Key facilitators to the implementation of evidence-based interventions included the perception of the leadership and church members that HIV interventions were needed and that the church was a promising venue for them. A primary barrier to implementation in this setting is the perception that discussions of sexuality should be private. Implementation of evidence-based HIV interventions for young adult African American women in church settings is feasible and needed. Building a level of comfort in discussing matters of sexuality and adapting existing evidence-based interventions to meet the needs of young women in church settings is a viable approach for successful implementation. © 2014 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.
Dimensions of Posttraumatic Growth in Patients With Cancer: A Mixed Method Study.
Heidarzadeh, Mehdi; Rassouli, Maryam; Brant, Jeannine M; Mohammadi-Shahbolaghi, Farahnaz; Alavi-Majd, Hamid
2017-08-12
Posttraumatic growth (PTG) refers to positive outcomes after exposure to stressful events. Previous studies suggest cross-cultural differences in the nature and amount of PTG. The aim of this study was to explore different dimensions of PTG in Iranian patients with cancer. A mixed method study with convergent parallel design was applied to clarify and determine dimensions of PTG. Using the Posttraumatic Growth Inventory (PTGI), confirmatory factor analysis was used to quantitatively identify dimensions of PTG in 402 patients with cancer. Simultaneously, phenomenological methodology (in-depth interviews with 12 patients) was used to describe and interpret the lived experiences of cancer patients in the qualitative part of the study. Five of the dimensions from the original PTGI were confirmed. Qualitatively, new dimensions of PTG emerged, including "inner peace and other positive personal attributes," "finding meaning of life," "being a role model," and "performing health-promoting behaviors." Results of the study indicated that PTG is a 5-dimensional concept with a broad range of subthemes for Iranian cancer patients and that the PTGI did not reflect all growth dimensions in Iranian cancer patients. Awareness of PTG dimensions can enable nurses to guide their use as coping strategies and provide context for positive changes in patients to promote quality care.
Probing fast ribozyme reactions under biological conditions with rapid quench-flow kinetics.
Bingaman, Jamie L; Messina, Kyle J; Bevilacqua, Philip C
2017-05-01
Reaction kinetics on the millisecond timescale pervade the protein and RNA fields. To study such reactions, investigators often perturb the system with abiological solution conditions or substrates in order to slow the rate to timescales accessible by hand mixing; however, such perturbations can change the rate-limiting step and obscure key folding and chemical steps that are found under biological conditions. Mechanical methods for collecting data on the millisecond timescale, which allow these perturbations to be avoided, have been developed over the last few decades. These methods are relatively simple and can be conducted on affordable and commercially available instruments. Here, we focus on using the rapid quench-flow technique to study the fast reaction kinetics of RNA enzymes, or ribozymes, which often react on the millisecond timescale under biological conditions. Rapid quench of ribozymes is completely parallel to the familiar hand-mixing approach, including the use of radiolabeled RNAs and fractionation of reactions on polyacrylamide gels. We provide tips on addressing and preventing common problems that can arise with the rapid-quench technique. Guidance is also offered on ensuring the ribozyme is properly folded and fast-reacting. We hope that this article will facilitate the broader use of rapid-quench instrumentation to study fast-reacting ribozymes under biological reaction conditions. Copyright © 2017 Elsevier Inc. All rights reserved.
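As a hedged, synthetic illustration of the downstream analysis (the data and parameter values here are invented, not taken from the chapter), quench-flow time points are typically fit to a single-exponential fraction-cleaved curve to recover the observed rate constant:

import numpy as np
from scipy.optimize import curve_fit

def frac_cleaved(t, k_obs, amp):
    """Single-exponential approach to the reaction endpoint."""
    return amp * (1.0 - np.exp(-k_obs * t))

# Hypothetical gel-quantified time points from a quench-flow run.
t = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5])     # seconds
f = np.array([0.09, 0.17, 0.30, 0.55, 0.74, 0.85, 0.90])   # fraction cleaved

(k_obs, amp), _ = curve_fit(frac_cleaved, t, f, p0=(10.0, 0.9))
print(f"k_obs ~ {k_obs:.1f} per second, amplitude ~ {amp:.2f}")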
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load balancing the number of nodes on each side of the interface; an imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on extending the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.
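For orientation, a serial baseline of the Fast Marching Method being parallelized might look like the following first-order sketch on a 2D grid (illustrative only; the paper's contribution is the domain decomposition, load balancing, and rollback machinery, none of which appears here):

import heapq
import numpy as np

def fast_march(speed, src, h=1.0):
    """First-order fast marching for |grad T| = 1/speed from point src."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    known = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if known[i, j]:
            continue
        known[i, j] = True                         # accept smallest trial value
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not known[ni, nj]:
                a = min(T[ni - 1, nj] if ni > 0 else np.inf,
                        T[ni + 1, nj] if ni < ny - 1 else np.inf)
                b = min(T[ni, nj - 1] if nj > 0 else np.inf,
                        T[ni, nj + 1] if nj < nx - 1 else np.inf)
                f = h / speed[ni, nj]
                if abs(a - b) >= f:                # one-sided (upwind) update
                    cand = min(a, b) + f
                else:                              # two-sided update
                    cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                if cand < T[ni, nj]:
                    T[ni, nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return T

T = fast_march(np.ones((64, 64)), (0, 0))
print(T[63, 63], "vs exact", float(np.hypot(63, 63)))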
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
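A hedged sketch of the classic (non-reflecting) walk-on-spheres estimator on a simple model problem conveys why the approach is embarrassingly parallel: the walks are independent. Here a Laplace Dirichlet problem on the unit disk with harmonic boundary data g(x, y) = x is used so the exact answer u(x0) = x0 is known; the paper's partially reflecting variant and EIT boundary conditions are substantially more involved.

import numpy as np

rng = np.random.default_rng(1)

def wos_estimate(x0, n_walks=20000, eps=1e-4):
    """Estimate u(x0) for Laplace's equation on the unit disk."""
    total = 0.0
    for _ in range(n_walks):                    # walks are independent:
        x = np.array(x0, dtype=float)           # embarrassingly parallel
        while True:
            r = 1.0 - np.hypot(x[0], x[1])      # distance to the boundary circle
            if r < eps:
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.array([np.cos(theta), np.sin(theta)])
        total += x[0]                           # boundary value g(x, y) = x
    return total / n_walks

print(wos_estimate((0.3, 0.2)))                 # exact answer: 0.3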
Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D
2014-01-01
The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media such as the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient- (CG-) type iterative method for treatment of the discrete model. The numerical scheme involves only the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to efficient parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
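As a generic illustration of the building blocks named in the abstract (sparse matrix-vector products inside a preconditioned CG iteration), and not of the authors' fictitious-domain scheme, a minimal SciPy sketch might read:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 20                                          # grid points per dimension
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.kron(T, I), I) + sp.kron(sp.kron(I, T), I)
     + sp.kron(sp.kron(I, I), T)).tocsr()       # 3D 7-point Laplacian
b = np.ones(A.shape[0])

d_inv = 1.0 / A.diagonal()                      # Jacobi (diagonal) preconditioner
M = LinearOperator(A.shape, matvec=lambda v: d_inv * v)

x, info = cg(A, b, M=M)
print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))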
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Detrixhe, Miles; Gibou, Frédéric
2016-10-01
The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
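A serial sketch of the underlying fast sweeping iteration for the eikonal equation |grad u| = 1/speed on a 2D grid may help fix ideas (an illustrative baseline only; the paper's contribution is the hybrid parallelization): Gauss-Seidel updates are repeated over the four diagonal sweep orderings.

import numpy as np

def fast_sweep(speed, src, h=1.0, n_sweeps=8):
    ny, nx = speed.shape
    big = 1e10
    u = np.full((ny, nx), big)
    u[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:              # the four diagonal sweep orderings
            for i in rows:
                for j in cols:
                    if (i, j) == src:
                        continue
                    a = min(u[i - 1, j] if i > 0 else big,
                            u[i + 1, j] if i < ny - 1 else big)
                    b = min(u[i, j - 1] if j > 0 else big,
                            u[i, j + 1] if j < nx - 1 else big)
                    f = h / speed[i, j]
                    if abs(a - b) >= f:        # one-sided (Godunov) update
                        cand = min(a, b) + f
                    else:                      # two-sided update
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

u = fast_sweep(np.ones((50, 50)), (25, 25))
print(u[0, 0], "vs exact", float(np.hypot(25, 25)))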
Song, Hongjun; Cai, Ziliang; Noh, Hongseok Moses; Bennett, Dawn J
2010-03-21
In this paper we present a numerical and experimental investigation of a chaotic mixer in a microchannel based on low-frequency switching of a transverse electroosmotic flow. By applying a low-frequency, square-wave electric field to a pair of parallel electrodes placed at the bottom of the channel, a complex 3D spatially and time-dependent flow is generated that stretches and folds the fluid, significantly enhancing the mixing effect. The mixing mechanism was first investigated by numerical and experimental analysis. The effects of operational parameters such as flow rate, frequency, and amplitude of the applied voltage were also investigated. It is found that the best mixing performance is achieved when the frequency is around 1 Hz, and the required mixing length is about 1.5 mm for an applied electric potential of 5 V peak-to-peak and a flow rate of 75 microL h(-1). The mixing performance was significantly enhanced when the applied electric potential was increased or the flow rate of the fluids was decreased.
EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2004-01-01
EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for the PDF (probability density function), and it is developed mainly for applications involving sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates unstructured meshes with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. The source code of EUPDF-II will be available with the National Combustion Code (NCC) as a complete package.
A mixed method pilot study: the researchers' experiences.
Secomb, Jacinta M; Smith, Colleen
2011-08-01
This paper reports on the outcomes of a small, well-designed pilot study. Pilot studies often disseminate limited or statistically meaningless results without adding to the body of knowledge on the comparative benefits of research designs. The design, a pre-test post-test parallel-group randomised controlled trial combined with inductive content analysis of focus group transcripts, was tested specifically to improve outcomes in a proposed larger study. Strategies are now in place to overcome operational barriers and recruitment difficulties. Links between the qualitative and quantitative arms of the proposed larger study have been made; it is anticipated that this will add depth to the final report. More extensive reporting on the outcomes of pilot studies would assist researchers and increase the body of knowledge in this area.
NASA Technical Reports Server (NTRS)
Hirsh, R. S.
1976-01-01
A numerical method is presented for solving the parabolic-elliptic Navier-Stokes equations. The solution procedure is applied to three-dimensional supersonic laminar jet flow issuing parallel with a supersonic free stream. A coordinate transformation is introduced which maps the boundaries at infinity into a finite computational domain in order to eliminate difficulties associated with the imposition of free-stream boundary conditions. Results are presented for an approximate circular jet, a square jet, varying aspect ratio rectangular jets, and interacting square jets. The solution behavior varies from axisymmetric to nearly two-dimensional in character. For cases where comparisons of the present results with those obtained from shear layer calculations could be made, agreement was good.
Exploratory tests of two strut fuel injectors for supersonic combustion
NASA Technical Reports Server (NTRS)
Anderson, G. Y.; Gooderum, P. B.
1974-01-01
Results of supersonic mixing and combustion tests performed with two simple strut injector configurations, one with parallel injectors and one with perpendicular injectors, are presented and analyzed. Good agreement is obtained between static pressure measured on the duct wall downstream of the strut injectors and distributions obtained from one-dimensional calculations. Measured duct heat load agrees with results of the one-dimensional calculations for moderate amounts of reaction, but is underestimated when large separated regions occur near the injection location. For the parallel injection strut, good agreement is obtained between the shape of the injected fuel distribution inferred from gas sample measurements at the duct exit and the distribution calculated with a multiple-jet mixing theory. The overall fraction of injected fuel reacted in the multiple-jet calculation closely matches the amount of fuel reaction necessary to match static pressure with the one-dimensional calculation. Gas sample measurements with the perpendicular injection strut also give results consistent with the amount of fuel reaction in the one-dimensional calculation.
Embodied information behavior, mixed reality and big data
NASA Astrophysics Data System (ADS)
West, Ruth; Parola, Max J.; Jaycen, Amelia R.; Lueg, Christopher P.
2015-03-01
A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies with a focus on consumer and industrial applications is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, explicitly in relation to data or information. Our observation is that VR/AR/MR technology development is a vision of the future framed in terms of promissory narratives. These narratives develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. It is a vision rooted in the combination of responsive, interactive, dynamic, sharable data streams, and augmentation of the physical senses for capabilities beyond those normally humanly possible. In parallel to the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods of measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves in order to reap the full potential and benefits of these emerging technologies.
Mitochondrial DNA heteroplasmy in the emerging field of massively parallel sequencing
Just, Rebecca S.; Irwin, Jodi A.; Parson, Walther
2015-01-01
Long an important and useful tool in forensic genetic investigations, mitochondrial DNA (mtDNA) typing continues to mature. Research in the last few years has demonstrated both that data from the entire molecule will have practical benefits in forensic DNA casework, and that massively parallel sequencing (MPS) methods will make full mitochondrial genome (mtGenome) sequencing of forensic specimens feasible and cost-effective. A spate of recent studies has employed these new technologies to assess intraindividual mtDNA variation. However, in several instances, contamination and other sources of mixed mtDNA data have been erroneously identified as heteroplasmy. Well vetted mtGenome datasets based on both Sanger and MPS sequences have found authentic point heteroplasmy in approximately 25% of individuals when minor component detection thresholds are in the range of 10–20%, along with positional distribution patterns in the coding region that differ from patterns of point heteroplasmy in the well-studied control region. A few recent studies that examined very low-level heteroplasmy are concordant with these observations when the data are examined at a common level of resolution. In this review we provide an overview of considerations related to the use of MPS technologies to detect mtDNA heteroplasmy. In addition, we examine published reports on point heteroplasmy to characterize features of the data that will assist in the evaluation of future mtGenome data developed by any typing method. PMID:26009256
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Parallel tempering for the traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximations to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations is performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
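A hedged toy implementation of the approach on a random instance (parameters illustrative, not the authors' settings) shows the two ingredients: Metropolis 2-opt moves within each replica, and occasional configuration swaps between neighboring temperatures.

import numpy as np

rng = np.random.default_rng(2)
n_city = 60
pts = rng.uniform(0.0, 1.0, (n_city, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def tour_len(t):
    return dist[t, np.roll(t, -1)].sum()

temps = np.geomspace(0.01, 1.0, 8)              # temperature ladder, cold to hot
tours = [rng.permutation(n_city) for _ in temps]
lens = [tour_len(t) for t in tours]

for sweep in range(2000):
    for k, T in enumerate(temps):               # Metropolis 2-opt move per replica
        i, j = sorted(rng.integers(0, n_city, 2))
        if j - i < 2:
            continue
        cand = tours[k].copy()
        cand[i:j] = cand[i:j][::-1]             # reverse a segment (2-opt)
        dl = tour_len(cand) - lens[k]
        if dl < 0 or rng.random() < np.exp(-dl / T):
            tours[k], lens[k] = cand, lens[k] + dl
    if sweep % 10 == 0:                         # attempt swaps between neighbors
        for k in range(len(temps) - 1):
            d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (lens[k] - lens[k + 1])
            if d >= 0 or rng.random() < np.exp(d):
                tours[k], tours[k + 1] = tours[k + 1], tours[k]
                lens[k], lens[k + 1] = lens[k + 1], lens[k]

print(f"best tour length found: {min(lens):.3f}")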
Beck, Cheryl Tatano; LoGiudice, Jenna; Gable, Robert K
2015-01-01
Secondary traumatic stress (STS) is an occupational hazard for clinicians who can experience symptoms of posttraumatic stress disorder (PTSD) from exposure to their traumatized patients. The purpose of this mixed-methods study was to determine the prevalence and severity of STS in certified nurse-midwives (CNMs) and to explore their experiences attending traumatic births. A convergent, parallel mixed-methods design was used. The American Midwifery Certification Board sent out e-mails to all their CNM members with a link to the SurveyMonkey study. The STS Scale was used to collect data for the quantitative strand. For the qualitative strand, participants were asked to describe their experiences of attending one or more traumatic births. IBM SPSS 21.0 (Version 21.0, Armonk, NY) was used to analyze the quantitative data, and Krippendorff content analysis was the method used to analyze the qualitative data. The sample consisted of 473 CNMs who completed the quantitative portion and 246 (52%) who completed the qualitative portion. In this sample, 29% of the CNMs reported high to severe STS, and 36% screened positive for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition diagnostic criteria for PTSD due to attending traumatic births. The top 3 types of traumatic births described by the CNMs were fetal demise/neonatal death, shoulder dystocia, and infant resuscitation. Content analysis revealed 6 themes: 1) protecting my patients: agonizing sense of powerlessness and helplessness; 2) wreaking havoc: trio of posttraumatic stress symptoms; 3) circling the wagons: it takes a team to provide support … or not; 4) litigation: nowhere to go to unburden our souls; 5) shaken belief in the birth process: impacting midwifery practice; and 6) moving on: where do I go from here? The midwifery profession should acknowledge STS as a professional risk. © 2015 by the American College of Nurse-Midwives.
Halász, István Zoltán; Bárány, Tamás
2016-08-24
In this work, the effect of mixing temperature (T_mix) on the mechanical, rheological, and morphological properties of rubber/cyclic butylene terephthalate (CBT) oligomer compounds was studied. Apolar (styrene butadiene rubber, SBR) and polar (acrylonitrile butadiene rubber, NBR) rubbers were modified by CBT (20 phr) for reinforcement and viscosity reduction. The mechanical properties were determined in tensile, tear, and dynamic mechanical thermal analysis (DMTA) tests. The viscosity changes caused by CBT were assessed by parallel-plate rheometry. The morphology was studied by scanning electron microscopy (SEM). CBT became better dispersed in the rubber matrices at elevated mixing temperatures (at which CBT was in a partially molten state), which resulted in improved tensile properties. With increasing mixing temperature, the size of the CBT particles in the compounds decreased significantly, from a few hundred microns to 5-10 microns. Compounding at temperatures above 120 °C and 140 °C for NBR and SBR, respectively, yielded reduced tensile mechanical properties, most likely due to degradation of the base rubber.
Spreadsheet Calculation of Jets in Crossflow: Opposed Rows of Slots Slanted at 45 Degrees
NASA Technical Reports Server (NTRS)
Holderman, James D.; Clisset, James R.; Moder, Jeffrey P.
2011-01-01
The purpose of this study was to extend a baseline empirical model to the case of jets entering the mainstream flow from opposed rows of 45-degree slanted slots. The results in this report were obtained using a spreadsheet modified from the one posted with NASA/TM--2010-216100. The primary conclusion of this report is that the best mixing configuration for opposed rows of 45-degree slanted slots at any downstream distance is a parallel staggered configuration, in which the slots are angled in the same direction on the top and bottom walls and one side is shifted by half the orifice spacing. Although distributions from perpendicular slanted slots are similar to those from parallel staggered configurations at some downstream locations, results for perpendicular slots are highly dependent on downstream distance: they are no better than parallel staggered slots at locations where the two are similar, and are worse at other distances.
DOT National Transportation Integrated Search
2013-04-01
A longitudinal joint is the interface between two adjacent and parallel hot-mix asphalt (HMA) mats. Inadequate joint construction can lead to a location where water can penetrate the pavement layers and reduce the structural support of the underlying...
Computation and Dynamics: Classical and Quantum
NASA Astrophysics Data System (ADS)
Kisil, Vladimir V.
2010-05-01
We discuss classical and quantum computations in terms of the corresponding Hamiltonian dynamics. This allows us to introduce quantum computations which involve parallel processing of both the data and the programme instructions. Using mixed quantum-classical dynamics, we examine the full cost of computations on quantum computers with classical terminals.
Robertson, Angela M; Garfein, Richard S; Wagner, Karla D; Mehta, Sanjay R; Magis-Rodriguez, Carlos; Cuevas-Mota, Jazmine; Moreno-Zuniga, Patricia Gonzalez; Strathdee, Steffanie A
2014-02-12
Policymakers and researchers seek answers to how liberalized drug policies affect people who inject drugs (PWID). In response to concerns about the failing "war on drugs," Mexico recently implemented drug policy reforms that partially decriminalized possession of small amounts of drugs for personal use while promoting drug treatment. Recognizing important epidemiologic, policy, and socioeconomic differences between the United States-where possession of any psychoactive drugs without a prescription remains illegal-and Mexico-where possession of small quantities for personal use was partially decriminalized, we sought to assess changes over time in knowledge, attitudes, behaviors, and infectious disease profiles among PWID in the adjacent border cities of San Diego, CA, USA, and Tijuana, Baja California, Mexico. Based on extensive binational experience and collaboration, from 2012-2014 we initiated two parallel, prospective, mixed methods studies: Proyecto El Cuete IV in Tijuana (n = 785) and the STAHR II Study in San Diego (n = 575). Methods for sampling, recruitment, and data collection were designed to be compatible in both studies. All participants completed quantitative behavioral and geographic assessments and serological testing (HIV in both studies; hepatitis C virus and tuberculosis in STAHR II) at baseline and four semi-annual follow-up visits. Between follow-up assessment visits, subsets of participants completed qualitative interviews to explore contextual factors relating to study aims and other emergent phenomena. Planned analyses include descriptive and inferential statistics for quantitative data, content analysis and other mixed-methods approaches for qualitative data, and phylogenetic analysis of HIV-positive samples to understand cross-border transmission dynamics. Investigators and research staff shared preliminary findings across studies to provide feedback on instruments and insights regarding local phenomena. As a result, recruitment and data collection procedures have been implemented successfully, demonstrating the importance of binational collaboration in evaluating the impact of structural-level drug policy reforms on the behaviors, health, and wellbeing of PWID across an international border. Our prospective, mixed methods approach allows each study to be responsive to emerging phenomena within local contexts while regular collaboration promotes sharing insights across studies. The strengths and limitations of this approach may serve as a guide for other evaluations of harm reduction policies internationally.
Microfluidic mixing using orbiting magnetic microbeads
NASA Astrophysics Data System (ADS)
Ballard, Matthew; Owen, Drew; Mao, Wenbin; Hesketh, Peter; Alexeev, Alexander
2013-11-01
Using three-dimensional simulations and experiments, we examine mixing in a microfluidic channel that incorporates a hybrid passive-active micromixer. The passive part of the mixer consists of a series of angled parallel ridges lining the top microchannel wall. The active component of the mixer is made up of microbeads rotating around small pillars on the bottom of the microchannel. In our simulations, we use a binary fluid lattice Boltzmann model to simulate the system and characterize the microfluidic mixing in the system. We consider the passive and active micromixers separately and evaluate their combined effect on the mixing of binary fluids. We compare our simulations with the experimental results obtained in a microchannel with magnetically actuated microbeads. Our findings guide the design of an efficient micromixer to be used in sampling in complex fluids. Financial support from NSF (CBET-1159726) is gratefully acknowledged.
Some fast elliptic solvers on parallel architectures and their complexities
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1989-01-01
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
An asymptotic analysis of supersonic reacting mixing layers
NASA Technical Reports Server (NTRS)
Jackson, T. L.; Hussaini, M. Y.
1987-01-01
The purpose of this paper is to present an asymptotic analysis of the laminar mixing and simultaneous chemical reaction between two parallel supersonic streams of reacting species. The study is based on a one-step irreversible Arrhenius reaction and on large activation energy asymptotics. Essentially it extends the work of Linan and Crespo to include the effect of free shear and Mach number on the ignition regime, the deflagration regime, and the diffusion flame regime. It is found that the effective parameter is the product of the characteristic Mach number and a shear parameter.
Soliton interactions and complexes for coupled nonlinear Schrödinger equations.
Jiang, Yan; Tian, Bo; Liu, Wen-Jun; Sun, Kun; Li, Min; Wang, Pan
2012-03-01
Under investigation in this paper are the coupled nonlinear Schrödinger (CNLS) equations, which can be used to govern the optical-soliton propagation and interaction in such optical media as the multimode fibers, fiber arrays, and birefringent fibers. By taking the 3-CNLS equations as an example for the N-CNLS ones (N≥3), we derive the analytic mixed-type two- and three-soliton solutions in more general forms than those obtained in the previous studies with the Hirota method and symbolic computation. With the choice of parameters for those soliton solutions, soliton interactions and complexes are investigated through the asymptotic and graphic analysis. Soliton interactions and complexes with the bound dark solitons in a mode or two modes are observed, including that (i) the two bright solitons display the breatherlike structures while the two dark ones stay parallel, (ii) the two bright and dark solitons all stay parallel, and (iii) the states of the bound solitons change from the breatherlike structures to the parallel one even with the distance between those solitons smaller than that before the interaction with the regular one soliton. Asymptotic analysis is also used to investigate the elastic and inelastic interactions between the bound solitons and the regular one soliton. Furthermore, some discussions are extended to the N-CNLS equations (N>3). Our results might be helpful in such applications as the soliton switch, optical computing, and soliton amplification in the nonlinear optics.
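For reference, the coupled equations in question can be written, in one common normalization (the paper's coefficients and sign conventions may differ), as

i\,\frac{\partial q_j}{\partial z} + \frac{1}{2}\,\frac{\partial^2 q_j}{\partial t^2} + \Big(\sum_{k=1}^{N} |q_k|^2\Big) q_j = 0, \qquad j = 1, \dots, N,

with N = 3 for the 3-CNLS case taken as the example above; here z is the propagation distance, t the retarded time, and q_j the envelope in the j-th mode.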
Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.
2014-01-01
The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327
CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2015-04-01
We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
Making Macroscopic Assemblies of Aligned Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Smalley, Richard E.; Colbert, Daniel T.; Smith, Ken A.; Walters, Deron A.; Casavant, Michael J.; Qin, Xiaochuan; Yakobson, Boris; Hauge, Robert H.; Saini, Rajesh Kumar; Chiung, Wan-Ting;
2005-01-01
A method of aligning and assembling single-wall carbon nanotubes (SWNTs) to fabricate macroscopic structures has been invented. The method entails suspending SWNTs in a fluid, orienting the SWNTs by use of a magnetic and/or electric field, and then removing the aligned SWNTs from suspension in such a way as to assemble them while maintaining the alignment. SWNTs are essentially tubular extensions of fullerene molecules. It is desirable to assemble aligned SWNTs into macroscopic structures because the common alignment of the SWNTs in such a structure makes it possible to exploit, on a macroscopic scale, the unique mechanical, chemical, and electrical properties that individual oriented SWNTs exhibit at the molecular level. Because of their small size and high electrical conductivity, carbon nanotubes, and especially SWNTs, are useful for making electrical connectors in integrated circuits. Carbon nanotubes can be used as antennas at optical frequencies, and as probes in scanning tunneling microscopes, atomic-force microscopes, and the like. Carbon nanotubes can be used with or instead of carbon black in tires. Carbon nanotubes are useful as supports for catalysts. Ropes of SWNTs are metallic and, as such, are potentially useful in some applications in which electrical conductors are needed - for example, they could be used as additives in formulating electrically conductive paints. Finally, macroscopic assemblies of aligned SWNTs can serve as templates for the growth of more and larger structures of the same type. The great variety of tubular fullerene molecules and of the structures that could be formed by assembling them in various ways precludes a complete description of the present method within the limits of this article. It must suffice to present a typical example of the use of one of many possible variants of the method to form a membrane comprising SWNTs aligned substantially parallel to each other in the membrane plane. The apparatus used in this variant of the method (see figure) includes a reservoir containing SWNTs dispersed in a suspending agent (for example, dimethylformamide) and a reservoir containing a suitable solvent (for example, water mixed with a surfactant). By use of either pressurized gas supplied from upstream or suction from downstream, the suspension of SWNTs and the solvent are forced to mix and flow into a tank. A filter inside the tank contains pores small enough to prevent the passage of most SWNTs, but large enough to allow the passage of molecules of the solvent and suspending agent. The filter is oriented perpendicular to the flow path. A magnetic field parallel to the plane of the filter is applied. The success of the method is based on the tendency of SWNTs to become aligned with their longitudinal axes parallel to an applied magnetic field. The alignment energy of an SWNT increases with the length of the SWNT and the magnetic-field strength. In order to obtain an acceptably small degree of statistical deviation of SWNTs of a given length from alignment with a magnetic field, one must make the field strong enough so that the thermal energy associated with rotation of an SWNT away from alignment is less than the alignment energy.
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is the introduction of novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers for the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
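A hedged sketch of the generic random-key encoding alluded to above (a textbook construction, not GAspLA itself): the integer part of each gene assigns a job to a machine and the fractional part orders the jobs on that machine. With sequence- and machine-dependent setup times, as in the paper, the decoded order affects the makespan; in this stripped-down version only the assignment does.

import numpy as np

rng = np.random.default_rng(3)
n_jobs, n_mach = 10, 3
p = rng.uniform(1.0, 10.0, (n_mach, n_jobs))    # machine-dependent process times

def decode(keys):
    """keys[j] in [0, n_mach): integer part = machine, fraction = priority."""
    mach = keys.astype(int)
    spans = []
    for m in range(n_mach):
        jobs = np.where(mach == m)[0]
        order = jobs[np.argsort(keys[jobs] % 1.0)]  # job sequence on machine m
        spans.append(p[m, order].sum())
    return max(spans)                               # makespan

pop = rng.uniform(0.0, n_mach, (50, n_jobs))        # random-key population
fitness = np.array([decode(ind) for ind in pop])
print(f"best makespan in the initial population: {fitness.min():.2f}")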
Atkinson, Quentin D; Gray, Russell D
2005-08-01
In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.
The Oxidized Low-Density Lipoprotein Receptor Mediates Vascular Effects of Inhaled Vehicle Emissions
Lucero, JoAnn; Harman, Melissa; Madden, Michael C.; McDonald, Jacob D.; Seagrave, Jean Clare; Campen, Matthew J.
2011-01-01
Rationale: To determine vascular signaling pathways involved in inhaled air pollution (vehicular engine emission) exposure–induced exacerbation of atherosclerosis that are associated with onset of clinical cardiovascular events. Objectives: To elucidate the role of oxidized low-density lipoprotein (oxLDL) and its primary receptor on endothelial cells, the lectin-like oxLDL receptor (LOX-1), in regulation of endothelin-1 expression and matrix metalloproteinase activity associated with inhalational exposure to vehicular engine emissions. Methods: Atherosclerotic apolipoprotein E knockout mice were exposed by inhalation to filtered air or mixed whole engine emissions (250 μg particulate matter [PM]/m3 diesel + 50 μg PM/m3 gasoline exhausts) 6 h/d for 7 days. Concurrently, mice were treated with either mouse IgG or neutralizing antibodies to LOX-1 every other day. Vascular and plasma markers of oxidative stress and expression proatherogenic factors were assessed. In a parallel study, healthy human subjects were exposed to either 100 μg PM/m3 diesel whole exhaust or high-efficiency particulate air and charcoal-filtered “clean” air (control subjects) for 2 hours, on separate occasions. Measurements and Main Results: Mixed emissions exposure increased oxLDL and vascular reactive oxygen species, as well as LOX-1, matrix metalloproteinase-9, and endothelin-1 mRNA expression and also monocyte/macrophage infiltration, each of which was attenuated with LOX-1 antibody treatment. In a parallel study, diesel exhaust exposure in volunteer human subjects induced significant increases in plasma-soluble LOX-1. Conclusions: These findings demonstrate that acute exposure to vehicular source pollutants results in up-regulation of vascular factors associated with progression of atherosclerosis, endothelin-1, and matrix metalloproteinase-9, mediated through oxLDL–LOX-1 receptor signaling, which may serve as a novel target for future therapy. PMID:21493736
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
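As one concrete instance of the embarrassingly parallel patterns this review covers (illustrative code, not drawn from the reviewed studies): a permutation test for Moran's I spatial autocorrelation, with the independent permutations farmed out to a process pool.

import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(4)
n = 200
W = (rng.random((n, n)) < 0.05).astype(float)   # toy binary spatial weights
np.fill_diagonal(W, 0.0)
y = rng.standard_normal(n)                      # attribute values at n sites

def morans_i(vals):
    z = vals - vals.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

def one_perm(seed):
    r = np.random.default_rng(seed)
    return morans_i(r.permutation(y))

if __name__ == "__main__":
    obs = morans_i(y)
    with Pool() as pool:                        # permutations are independent
        null = pool.map(one_perm, range(999))
    p = (1 + sum(abs(v) >= abs(obs) for v in null)) / 1000.0
    print(f"Moran's I = {obs:.4f}, pseudo p-value = {p:.3f}")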
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce that time. In general the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method can produce good speedup over the SGS method. In this study, the feasibility of a parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
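A minimal 2D analogue of the comparison (sizes and right-hand side illustrative) makes the parallelization point concrete: the Jacobi update depends only on the previous iterate, so it vectorizes and parallelizes naturally, while Gauss-Seidel sweeps in place.

import numpy as np

def jacobi(u, f, h, n_iter):
    # Whole-grid update from the previous iterate (NumPy evaluates the
    # right-hand side before assigning), so this is a true Jacobi sweep.
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h * h * f[1:-1, 1:-1])

def gauss_seidel(u, f, h, n_iter):
    # In-place sweep: each update immediately uses the newest values.
    ny, nx = u.shape
    for _ in range(n_iter):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                u[i, j] = 0.25 * (u[i + 1, j] + u[i - 1, j]
                                  + u[i, j + 1] + u[i, j - 1] - h * h * f[i, j])

n = 64
h = 1.0 / (n - 1)
f = -np.ones((n, n))                            # Poisson problem: lap(u) = f
uj, ug = np.zeros((n, n)), np.zeros((n, n))
jacobi(uj, f, h, 500)
gauss_seidel(ug, f, h, 500)
print(f"center values: Jacobi {uj[n//2, n//2]:.5f}, Gauss-Seidel {ug[n//2, n//2]:.5f}")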
ERIC Educational Resources Information Center
Vasquez-Mireles, Selina; West, Sandra
2007-01-01
A correlated science lesson is characterized as an integrated science lesson in that it may incorporate traditionally integrated activities and use math as a tool. However, a correlated math-science lesson also: (1) has the pertinent math and science objectives aligned with state standards; and (2) teaches parallel science and math ideas equally.…
Huang, Jianhua
2012-07-01
There are three methods for calculating the thermal insulation of clothing measured with a thermal manikin: the global method, the serial method, and the parallel method. For homogeneous clothing insulation, the three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method, although it is possible for the serial value to be lower than the global value. The serial method always gives a higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under a uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing; if the constant heat flux mode is used, the serial method can be used. The global method should be used for calculating the thermal insulation of clothing for all manikin control modes, especially for the thermal comfort regulation mode, and should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide additional information about the different parts of the clothing.
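The relations stated above are consistent with the standard area-weighted definitions below, written in common manikin notation with segment area fractions $a_i = A_i/A$, local surface temperatures $T_{s,i}$, ambient temperature $T_a$ and local heat fluxes $q_i = Q_i/A_i$; these formulas are an assumed reconstruction, not quoted from the paper.

```latex
I_{\mathrm{global}} = \frac{\bar{T}_s - T_a}{\bar{q}}, \qquad
\bar{T}_s = \sum_i a_i \, T_{s,i}, \quad \bar{q} = \sum_i a_i \, q_i

I_{\mathrm{serial}} = \sum_i a_i \, I_i, \qquad
I_i = \frac{T_{s,i} - T_a}{q_i}

I_{\mathrm{parallel}} = \Bigl( \sum_i \frac{a_i}{I_i} \Bigr)^{-1}
```

Uniform $q_i$ makes the serial and global values coincide, uniform $T_{s,i}$ makes the parallel and global values coincide, and the weighted AM-HM inequality gives $I_{\mathrm{serial}} \ge I_{\mathrm{parallel}}$, matching the statements in the abstract.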
Methods of parallel computation applied on granular simulations
NASA Astrophysics Data System (ADS)
Martins, Gustavo H. B.; Atman, Allbens P. F.
2017-06-01
Every year, parallel computing becomes cheaper and more accessible; as a consequence, its applications have spread over all research areas. Granular materials are a promising area for parallel computing. To support this statement, we study the impact of parallel computing on simulations of the BNE (Brazil nut effect), the remarkable rising of an intruder confined in a granular medium when it is vertically shaken against gravity. By means of DEM (discrete element method) simulations, we study the code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
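The contact-finding optimization mentioned last is, in essence, a cell-list search. A minimal Python sketch of the idea follows (the study's code and its OpenMP parallelization are not shown; in a parallel version the loop over cells is the natural parallel-for):

```python
import numpy as np
from collections import defaultdict

def find_contacts(pos, radius, cell):
    """Cell-list (Verlet cell) contact search: bin particles into cells of
    edge >= contact distance, then test only pairs in the same or adjacent
    cells, reducing the naive O(N^2) sweep to roughly O(N)."""
    grid = defaultdict(list)
    for i, p in enumerate(pos):
        grid[tuple((p // cell).astype(int))].append(i)
    contacts = []
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j and np.linalg.norm(pos[i] - pos[j]) < 2 * radius:
                            contacts.append((i, j))
    return contacts

pos = np.random.rand(1000, 2)      # toy 2D packing
print(len(find_contacts(pos, radius=0.01, cell=0.02)))
```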
Parallel Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Baggag, Abdalkader; Atkins, Harold; Keyes, David
1999-01-01
This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time-dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.
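In a domain-decomposed DG code of this kind, each rank advances its own elements and exchanges boundary traces with its neighbours every step. A hedged sketch of that exchange pattern using mpi4py (an assumption for illustration; the paper's code is a compiled MPI implementation):

```python
# Run with: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a 1D slab of element traces; u[0] and u[-1] are ghost
# slots for the neighbours' boundary data (periodic ring of ranks).
u = np.arange(10, dtype=float) + 100.0 * rank
left, right = (rank - 1) % size, (rank + 1) % size

# Deadlock-free paired exchange: send my boundary trace one way while
# receiving the neighbour's trace from the other side.
u[-1] = comm.sendrecv(u[1], dest=left, source=right)   # fill right ghost
u[0] = comm.sendrecv(u[-2], dest=right, source=left)   # fill left ghost
# Numerical fluxes on the slab boundaries can now be evaluated locally.
```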
ERIC Educational Resources Information Center
Çokluk, Ömay; Koçak, Duygu
2016-01-01
In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
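Parallel analysis itself (Horn's procedure) is compact enough to sketch. The following generic Python version is illustrative and not the software used in the study: retain factors whose observed correlation eigenvalues exceed a chosen percentile of eigenvalues computed from random data of the same shape.

```python
import numpy as np

def parallel_analysis(data, n_sims=500, quantile=95, seed=0):
    """Horn's parallel analysis: count eigenvalues of the observed
    correlation matrix that exceed the chosen percentile of eigenvalues
    obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        x = rng.standard_normal((n, p))
        rand[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    threshold = np.percentile(rand, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Toy data with two correlated blocks of three variables each.
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 2))
data = np.repeat(f, 3, axis=1) + 0.5 * rng.standard_normal((300, 6))
print(parallel_analysis(data))  # expected to flag two factors here
```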
Development and Application of a Parallel LCAO Cluster Method
NASA Astrophysics Data System (ADS)
Patton, David C.
1997-08-01
CPU-intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message-passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail, as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation, which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C14H10) and tetracene (C18H12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations are analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the sequential indicator simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K thresholds (continuous case). Similarly, the sequential Gaussian simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
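The batching idea can be illustrated with a toy greedy scheme: nodes assigned to the same batch are mutually farther apart than the search radius, so they do not enter each other's kriging neighbourhoods. This sketch only shows the separation constraint; the paper's exactness-preserving re-arrangement additionally respects the conditioning order along the original path.

```python
import numpy as np

def batch_path(coords, path, min_dist):
    """Greedily split a random simulation path into batches whose nodes
    are mutually farther apart than min_dist (e.g. the search-neighbourhood
    radius), so one batch could be simulated concurrently. A toy
    illustration only; it does not reproduce the paper's exactness proof."""
    batches = []
    for node in path:
        placed = False
        for batch in batches:
            if all(np.linalg.norm(coords[node] - coords[m]) > min_dist
                   for m in batch):
                batch.append(node)
                placed = True
                break
        if not placed:
            batches.append([node])
    return batches

coords = np.random.rand(200, 2)
path = np.random.permutation(200)
print(len(batch_path(coords, path, min_dist=0.2)), "batches")
```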
NASA Astrophysics Data System (ADS)
Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru
2007-02-01
Recently, many types of computer-generated stereograms (CGSs), i.e., works of art produced using computers, have been published for hobby and entertainment. It is said that activation of the brain, improvement of visual eyesight, decrease of mental stress, healing effects, etc. can be expected when a CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair, (2) anaglyph, (3) repeated pattern, (4) embedded, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple-type stereogram. Each type has advantages and disadvantages when the stereogram is viewed directly with two eyes, after training with a little patience. In this study, the characteristics of united, synthesized and mixed-type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and of the visual quality of the virtual image, by means of simultaneous observation in the case of the parallel viewing method.
Thein, Z M; Smaranayake, Y H; Smaranayake, L P
2007-11-01
Despite the increasing recognition of the role played by mixed-species biofilms in health and disease, the behavior of these biofilms and the factors modulating them remain elusive. We therefore compared the effects of serum, two dietary sugars (sucrose and galactose) and a biocide, chlorhexidine digluconate, on a dual-species biofilm (DSB) of Candida albicans and Escherichia coli and on their single-species biofilm (SSB) counterparts. Both modes of biofilm growth on polystyrene plastic surfaces were quantified using a viable cell count method and visualized using confocal scanning laser microscopy (CSLM). Present data indicate that co-culture of C. albicans with varying initial concentrations of E. coli leads to a significant inhibition of yeast growth (r = -0.964; p < 0.001). Parallel ultrastructural studies using CSLM and a Live/Dead stain confirmed that E. coli growth rendered blastospores and hyphal yeasts non-viable in the DSB. The SSB of C. albicans showed pronounced growth when its growth surface was pretreated with serum and when sugar supplements were added to the incubation medium (p < 0.05). Intriguingly, C. albicans in the DSB was more resistant to the antiseptic effect of chlorhexidine digluconate. Taken together, the current data elucidate some features of the colonization resistance offered by bacteria in mixed bacterial/fungal habitats and how such phenomena may contribute to the development of fungal superinfection during antimicrobial therapy.
Newton-like methods for Navier-Stokes solution
NASA Astrophysics Data System (ADS)
Qin, N.; Xu, X.; Richards, B. E.
1992-12-01
The paper reports on Newton-like methods, called SFDN-alpha-GMRES and SQN-alpha-GMRES, that have been devised and proven to be powerful schemes for the large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied starting from a partially converged solution obtained with a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed-memory parallel computer. The methods are illustrated by solving a hypersonic vortical flow on a RISC workstation and on a transputer parallel system, respectively.
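The Newton-Krylov idea behind such schemes, an outer Newton iteration whose linear systems are solved by GMRES, started from a partially converged state, can be sketched with SciPy's Jacobian-free newton_krylov on a small stand-in problem (steady viscous Burgers flow under an assumed discretization; an illustration, not the authors' solver):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady viscous Burgers equation u u_x = nu u_xx, a small stand-in for
# the compressible Navier-Stokes residual treated in the paper.
n, nu = 101, 0.05
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(u):
    r = np.empty_like(u)
    r[0], r[-1] = u[0] - 1.0, u[-1] + 1.0          # Dirichlet ends
    r[1:-1] = (u[1:-1] * (u[2:] - u[:-2]) / (2 * h)
               - nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2)
    return r

# "Partially converged" initial guess, then a Jacobian-free
# Newton-Krylov (GMRES-based) solve; should converge for this mild case.
u0 = 1.0 - 2.0 * x
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
```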
NASA Astrophysics Data System (ADS)
Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang
2018-04-01
The edge-based smoothed finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems on a graphics processing unit (GPU) using a special edge-smoothed triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve the ES-FEM-based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.
Fukatsu, H; Nohara, K; Kotani, Y; Tanaka, N; Matsuno, K; Sakai, T
2015-08-01
It is known that solid food is actively transported to the pharynx while being crushed by chewing and mixed with saliva in the oral cavity; food bolus formation should therefore be considered to take place from the oral cavity to the pharynx. In previous studies, chewed food was evaluated after it had been removed from the oral cavity, but it has been pointed out that spitting food out of the oral cavity interferes with natural food bolus formation. We therefore observed food boluses immediately before swallowing using an endoscope, in order to establish a method for evaluating the bolus-forming function, and simultaneously performed endoscopic evaluation of food bolus formation and its relationship with the number of chewing cycles. The endoscope was inserted nasally, and the subject was instructed to eat two coloured samples of boiled rice simultaneously under two ingestion conditions ('as usual' and 'chewing well'). The condition of the food bolus was graded into three categories for each of grinding, mixing and aggregation, scored 2, 1 and 0. The aggregation score was high under both ingestion conditions. The grinding and mixing scores tended to be higher in subjects with a high number of chewing cycles, whereas the aggregation score was high regardless of the number of chewing cycles. This suggests that food has to be aggregated for a bolus to reach the swallowing threshold, even when the number of chewing cycles is low and the food is not well ground or mixed. © 2015 John Wiley & Sons Ltd.
Numerical Simulation of Shock-Dispersed Fuel Charges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bell, John B.; Day, Marcus; Beckner, Vincent
Successfully attacking underground storage facilities for chemical and biological (C/B) weapons is an important mission area for the Department of Defense. The fate of a C/B agent during an attack depends critically on the pressure and thermal environment that the agent experiences. The initial environment is determined by the blast wave from an explosive device. The byproducts of the detonation provide a fuel source that burns when mixed with oxidizer (afterburning). Additional energy can be released by the ignition of the C/B agent as it mixes with the explosion products and the air in the chamber. Hot plumes venting material from any openings in the chamber can provide fuel for additional energy release when mixed with additional oxidizer. Assessment of the effectiveness of current explosives as well as the development of new explosive systems requires a detailed understanding of all of these modes of energy release. Using methodologies based on higher-order Godunov schemes combined with Adaptive Mesh Refinement (AMR), implemented in a parallel adaptive framework suited to the massively parallel computer systems provided by the DOD High-Performance Computing Modernization program, we use a suite of programs to develop predictive models for the simulation of the energetics of blast waves, deflagration waves and ejecta plumes. The programs use realistic reaction kinetic and thermodynamic models provided by standard components (such as CHEMKIN) as well as other novel methods to model enhanced explosive devices. The work described here focuses on the validation of these models against a series of bomb calorimetry experiments performed at the Ernst-Mach Institute. In this paper, we present three-dimensional simulations of the experiments, examining the explosion dynamics and the role of subsequent burning of the explosion products on the thermal and pressure environment within the calorimeter. The effects of burning are quantified by comparing two sets of computations, one in which the calorimeter is filled with nitrogen so that there is no afterburning and a second in which the calorimeter contains air.
Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie
2014-01-01
It is very time-consuming to solve fractional differential equations. The computational complexity of a two-dimensional fractional differential equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency with 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.
NASA Astrophysics Data System (ADS)
Chen, Li-Jen; Hesse, Michael; Wang, Shan; Gershman, Daniel; Ergun, Robert; Pollock, Craig; Torbert, Roy; Bessho, Naoki; Daughton, William; Dorelli, John; Giles, Barbara; Strangeway, Robert; Russell, Christopher; Khotyaintsev, Yuri; Burch, Jim; Moore, Thomas; Lavraud, Benoit; Phan, Tai; Avanov, Levon
2016-06-01
Measurements from the Magnetospheric Multiscale (MMS) mission are reported to show distinct features of electron energization and mixing in the diffusion region of terrestrial magnetopause reconnection. At the ion jet and magnetic field reversals, distribution functions exhibiting signatures of accelerated meandering electrons are observed at an electron out-of-plane flow peak. The meandering signatures, manifested as triangular and crescent structures, are established features of the electron diffusion region (EDR). Effects of meandering electrons on the electric field normal to the reconnection layer are detected. Parallel acceleration and mixing of the inflowing electrons with exhaust electrons shape the exhaust flow pattern. In the EDR vicinity, the measured distribution functions indicate that locally the electron energization and mixing physics is captured by two-dimensional reconnection, yet to account for the simultaneous four-point measurements, translational invariance in the third dimension must be violated on the ion-skin-depth scale.
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
1999-01-01
This paper reviews three recent works on numerical methods to integrate ordinary differential equations (ODEs), specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as a large-scale nonlinear system and solves it by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to yield an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs the trial integrations in parallel, and the trial integrations are further accelerated by balancing the computational load among PUs by the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, like the implicit Hermitian integrators (Makino and Aarseth, 1992; Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998; Fukushima, 1999).
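The core of the Picard iteration underlying the first method, sweeping the Picard integral over the whole time grid at once so that each sweep is vector/parallel friendly, can be sketched on a uniform grid. This is a simplification: the actual Picard-Chebyshev method uses Chebyshev nodes and a polynomial representation, both omitted here.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Picard iteration u_{k+1}(t) = u0 + integral_0^t f(u_k(s)) ds on a grid.
# Each sweep evaluates f at all grid points simultaneously.
t = np.linspace(0.0, 1.0, 201)
f = lambda u: -u          # test problem u' = -u, u(0) = 1
u = np.ones_like(t)       # initial guess: the constant u0
for _ in range(30):
    u = 1.0 + cumulative_trapezoid(f(u), t, initial=0.0)
print(np.max(np.abs(u - np.exp(-t))))   # small; limited by trapezoid rule
```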
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
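The partial-fraction trick mentioned for the Padé and Chebyshev variants is that a rational approximation r(A)u0 of exp(A)u0 splits into independent shifted linear solves, one per pole, which can run on separate processors. A hedged interface sketch follows; the poles and weights are placeholder inputs that would come from the chosen rational approximant, and are not computed here.

```python
import numpy as np

def rational_expm_apply(A, b, poles, weights):
    """Evaluate r(A) b = sum_i w_i (A - z_i I)^{-1} b for a rational
    approximant r(z) of exp(z). Each shifted solve is independent of the
    others, which is the source of the extra parallelism described in the
    abstract. poles/weights are hypothetical inputs from a Pade or
    Chebyshev approximant, not derived in this sketch."""
    n = A.shape[0]
    terms = [w * np.linalg.solve(A - z * np.eye(n), b)
             for z, w in zip(poles, weights)]   # embarrassingly parallel
    return np.real(sum(terms))
```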
An asymptotic induced numerical method for the convection-diffusion-reaction equation
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.; Sorensen, Danny C.
1988-01-01
A parallel algorithm for the efficient solution of a time-dependent reaction-convection-diffusion equation with a small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels: domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run-time results demonstrate the viability of the method.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-20
... December 18, 2012, (77 FR 74820), EPA proposed to approve through parallel processing Tennessee's October... well as changes to future vehicle mix assumptions, that influence the emission estimations. TDEC has... 2014. \\2\\ A safety margin is the difference between the attainment level of emissions from all source...
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformations are based on a cluster CPU- and GPU-oriented hardware platform. Mathematical models of training the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems of organizing high-performance computations on super-large arrays of information designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the ability to work with images of very large dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared-memory multi-processors and distributed-memory architectures is presented. The two fundamental building blocks of the framework are: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of the domain decomposition naturally maps to a divide-and-conquer algorithm, which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of the tasks on the set of available CPUs. Performance results for this approach are presented and discussed in detail, as well as future work and improvements.
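The Master/Worker pattern named here maps directly onto a dynamic task pool. A minimal Python sketch (with a hypothetical stand-in workload, not the framework's code), in which whichever worker finishes first pulls the next sub-domain:

```python
import multiprocessing as mp

def mesh_subdomain(subdomain_id):
    """Stand-in for meshing one sub-domain with the advancing-front
    method; cost varies per sub-domain, which is why dynamic scheduling
    helps balance the load."""
    return subdomain_id, sum(i * i for i in range(10_000 * (subdomain_id % 7 + 1)))

if __name__ == "__main__":
    with mp.Pool(processes=4) as pool:
        # imap_unordered hands a new sub-domain to whichever worker becomes
        # free first: a minimal master/worker dynamic load balancer.
        for sid, _ in pool.imap_unordered(mesh_subdomain, range(32)):
            print("finished sub-domain", sid)
```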
Parallel manipulation of individual magnetic microbeads for lab-on-a-chip applications
NASA Astrophysics Data System (ADS)
Peng, Zhengchun
Many scientists and engineers are turning to lab-on-a-chip systems for faster and cheaper analysis of chemical reactions and biomolecular interactions. A common approach that facilitates the handling of reagents and biomolecules in these systems utilizes micro/nano beads as the solid carrier. Physical manipulation, such as assembly, transport, sorting, and tweezing, of beads on a chip represents an essential step for fully utilizing their potential in a wide spectrum of bead-based analyses. Previous work demonstrated manipulation of either an ensemble of beads without individual control, or single beads without the capability for parallel operation. Parallel manipulation of individual beads is required to meet the demand for high-throughput and location-specific analysis. In this work, we introduced two methods for parallel manipulation of individual magnetic microbeads, which can serve as effective lab-on-a-chip platforms and/or efficient analytic tools. The first method employs arrays of soft ferromagnetic patterns fabricated inside a microfluidic channel and subjected to an external magnetic field. We demonstrated that the system can be used to assemble individual beads (1-3 µm) from a flow of suspended beads into a regular array on the chip, hence improving the integrated electrochemical detection of biomolecules bound to the bead surface. By rotating the external field, the assembled microbeads can be remotely controlled with synchronized, high-speed circular motion around individual soft magnets on the chip. We employed this manipulation mode for efficient sample mixing in continuous microflow. Furthermore, we discovered a simple but effective way of transporting the microbeads on the chip by varying the strength of the local bias field within a revolution of the external field. In addition, selective transport of microbeads of different sizes was realized, providing a platform for effective on-chip sample separation and offering the potential for multiplexing capability. The second method integrates magnetic and dielectrophoretic manipulation of the same microbeads. The device combines tapered conducting wires and fingered electrodes to generate the desired magnetic and electric fields, respectively. By externally programming the magnetic attraction and dielectrophoretic repulsion forces, out-of-plane oscillation of the microbeads across the channel height was realized. This manipulation mode can facilitate the interaction between the beads and multiple layers of sample fluid inside the channel. We further demonstrated the tweezing of microbeads in liquid with high spatial resolution, from the submicrometer to the nanometer range, by fine-tuning the net force from magnetic attraction and dielectrophoretic repulsion of the beads. The high-resolution control of the out-of-plane motion of the microbeads led to the invention of massively parallel biomolecular tweezers. We believe the maturation of bead-based microtweezers will revolutionize the state-of-the-art tools currently used for single-cell and single-molecule studies.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize the iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. The approach is implemented in the DelPhi program, a finite difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi several fold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
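The abstract does not spell out the authors' parallelization scheme. For contrast, a common (and different) way to expose parallelism in Gauss-Seidel on grid problems is red/black ordering, in which points of one colour depend only on points of the other colour; the sketch below is that standard technique, not DelPhi's method.

```python
import numpy as np

def red_black_gs(u, f, h, sweeps):
    """Red/black Gauss-Seidel for a 2D Poisson problem: each half-sweep
    updates one whole colour class at once (vectorized here; threads or
    MPI in practice), since a point's stencil touches only the other
    colour. Boundary values of u are held fixed."""
    ii, jj = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                         indexing="ij")
    for _ in range(sweeps):
        for colour in (0, 1):
            mask = ((ii + jj) % 2) == colour
            mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
            u[mask] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1)
                              - h * h * f)[mask]
    return u

u = red_black_gs(np.zeros((65, 65)), np.ones((65, 65)), h=1.0 / 64, sweeps=200)
```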
Histological assessment of the triangular fibrocartilage complex.
Semisch, M; Hagert, E; Garcia-Elias, M; Lluch, A; Rein, S
2016-06-01
The morphological structure of the seven components of triangular fibrocartilage complexes of 11 cadaver wrists of elderly people was assessed microscopically, after staining with Hematoxylin-Eosin and Elastica van Gieson. The articular disc consisted of tight interlaced fibrocartilage without blood vessels except in its ulnar part. Volar and dorsal radioulnar ligaments showed densely parallel collagen bundles. The subsheath of the extensor carpi ulnaris muscle, the ulnotriquetral and ulnolunate ligament showed mainly mixed tight and loose parallel tissue. The ulnolunate ligament contained tighter parallel collagen bundles and clearly less elastic fibres than the ulnotriquetral ligament. The ulnocarpal meniscoid had an irregular morphological composition and loose connective tissue predominated. The structure of the articular disc indicates a buffering function. The tight structure of radioulnar and ulnolunate ligaments reflects a central stabilizing role, whereas the ulnotriquetral ligament and ulnocarpal meniscoid have less stabilizing functions. © The Author(s) 2015.
LSPRAY-III: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2008-01-01
LSPRAY-III is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type for the gas-flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace applications. The manual provides the user with an understanding of the various models involved in the spray formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. With the development of LSPRAY-III, we have advanced the state-of-the-art in spray computations in several important ways.
LSPRAY-II: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2004-01-01
LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type for the gas-flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace applications. The manual provides the user with an understanding of the various models involved in the spray formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.
NASA Astrophysics Data System (ADS)
Ergun, R. E.; Holmes, J. C.; Goodrich, K. A.; Wilder, F. D.; Stawarz, J. E.; Eriksson, S.; Newman, D. L.; Schwartz, S. J.; Goldman, M. V.; Sturner, A. P.; Malaspina, D. M.; Usanova, M. E.; Torbert, R. B.; Argall, M.; Lindqvist, P.-A.; Khotyaintsev, Y.; Burch, J. L.; Strangeway, R. J.; Russell, C. T.; Pollock, C. J.; Giles, B. L.; Dorelli, J. J. C.; Avanov, L.; Hesse, M.; Chen, L. J.; Lavraud, B.; Le Contel, O.; Retino, A.; Phan, T. D.; Eastwood, J. P.; Oieroset, M.; Drake, J.; Shay, M. A.; Cassak, P. A.; Nakamura, R.; Zhou, M.; Ashour-Abdalla, M.; André, M.
2016-06-01
We report observations from the Magnetospheric Multiscale satellites of large-amplitude, parallel, electrostatic waves associated with magnetic reconnection at the Earth's magnetopause. The observed waves have parallel electric fields (E||) with amplitudes on the order of 100 mV/m and display nonlinear characteristics that suggest a possible net E||. These waves are observed within the ion diffusion region and adjacent to (within several electron skin depths of) the electron diffusion region. They are in or near the magnetosphere-side current layer. Simulation results support that the strong electrostatic linear and nonlinear wave activity appears to be driven by a two-stream instability, which is a consequence of mixing cold (<10 eV) plasma in the magnetosphere with warm (~100 eV) plasma from the magnetosheath on a freshly reconnected magnetic field line. The frequent observation of these waves suggests that cold plasma is often present near the magnetopause.
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by preventing the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overhead compared to the "best" sequential implementation of the BW algorithm.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently exhibit a workload imbalance problem when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulation over large domains on massively parallel supercomputer systems. Our method uses the imbalances in the execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes them with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and to exploit more resources of future massively parallel supercomputers are also discussed.
Ghotane, S G; Harrison, V; Radcliffe, E; Jones, E; Gallagher, J E
2017-05-12
Background The need for periodontal management is great and increasing; thus, the oral and dental workforce should be suitably equipped to deliver contemporary care. Health Education London developed a training scheme to extend the skills of dentists and dental care professionals (DCPs). Aim To examine the feasibility of assessing a skill-mix initiative established to enhance skills in clinical periodontology, involving the views of patients, clinicians and key stakeholders, together with clinical and patient outcomes in London. Methods This mixed-methods feasibility and pilot study involved four parallel elements: a postal questionnaire survey of patients; analysis of clinical logbooks; a self-completion questionnaire survey of clinicians; and semi-structured interviews with key stakeholders, including clinicians. Results Twelve of the 19 clinicians participated in the evaluation, returning completed questionnaires (63%) and providing access to log diaries and patients. Periodontal data from 42 log-diary cases (1,103 teeth) revealed significant improvement in clinical outcomes (P = 0.001 for all). Eighty-four percent (N = 99) of the 142 patients returning a questionnaire reported improved dental health; however, responses from hospital patients greatly exceeded those from dental practice. Interviews (N = 22) provided evidence that the programme contributed to professional healthcare across four key domains: 'service', 'quality care', 'professional' and 'educational'. Clinicians, while supportive of the concept, raised concerns regarding the mismatch between their expectations and the programme's educational and service outcomes. Discussion The findings suggest that it is feasible to deliver and evaluate inter-professional extended-skills training for dentists and dental care professionals, and that this may be evaluated using mixed methods examining outcomes that include clinical log diaries, patient questionnaires and stakeholder interviews. This inter-professional course represents a positive development for patient care using the expertise of different members of the dental team; however, its formal integration into the health and educational sectors requires further consideration.
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation-solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and to exploit modern computer architecture features, such as memory hierarchies and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate the formulation on ORIGIN 2000 processors and on a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that it is significantly faster and consumes less memory than one of the best available commercial parallel sparse solvers.
Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M
2014-09-01
An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one by one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel, synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated (i) by simulation at CERN, in a case study of a cold box for enhancing the cryogenics diagnostics of the Large Hadron Collider, and (ii) by experiments within the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
NASA Astrophysics Data System (ADS)
Zerr, Robert Joseph
2011-12-01
The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, to determine the most suitable way to parallelize the entire procedure, and to evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much lower, further limiting instances where it would be beneficial to select the ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of the scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of the number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times.
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
Mariën, Peter; Abutalebi, Jubin; Engelborghs, Sebastiaan; De Deyn, Peter P
2005-12-01
Acquired aphasia after circumscribed vascular subcortical lesions has not been reported in bilingual children. We report clinical and neuroimaging findings in an early bilingual boy who incurred equally severe transcortical sensory aphasia in his first language (L1) and second language (L2) after a posterior left thalamic hemorrhage. Following recurrent bleeding of the lesion the aphasic symptoms substantially aggravated. Spontaneous pathological language switching and mixing were found in both languages. Remission of these phenomena was reflected on brain perfusion SPECT revealing improved perfusion in the left frontal lobe and left caudate nucleus. The parallelism between the evolution of language symptoms and the SPECT findings may demonstrate that a subcortical left frontal lobe circuity is crucially involved in language switching and mixing.
Controllability of switched singular mix-valued logical control networks with constraints
NASA Astrophysics Data System (ADS)
Deng, Lei; Gong, Mengmeng; Zhu, Peiyong
2018-03-01
The present paper investigates the controllability problem of switched singular mix-valued logical control networks (SSMLCNs) with constraints on states and controls. First, using the semi-tensor product (STP) of matrices, the SSMLCN is expressed in an algebraic form, based on which a necessary and sufficient condition is given for the uniqueness of the solution of an SSMLCN. Second, a necessary and sufficient criterion is derived for the controllability of constrained SSMLCNs, by converting a constrained SSMLCN into a parallel constrained switched mix-valued logical control network. Third, an algorithm is presented to design a proper switching sequence and a control scheme that drive a state to a reachable target state. Finally, a numerical example is given to demonstrate the efficiency of the results obtained in this paper.
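The semi-tensor product that underlies this algebraic form generalizes the matrix product to dimension-mismatched factors. A minimal sketch of the left STP and its use on logical vectors follows; this is the textbook construction, not the paper's code, and the AND structure matrix below is the standard one.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product A |x| B: with A (m x n), B (p x q) and
    t = lcm(n, p), compute (A kron I_{t/n}) @ (B kron I_{t/p}); it reduces
    to the ordinary matrix product when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Logical values as vectors: True = [1,0]^T, False = [0,1]^T; a binary
# logical operator is a 2 x 4 structure matrix acting on stp(x1, x2).
x1, x2 = np.array([[1], [0]]), np.array([[0], [1]])
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])      # columns: (T,T),(T,F),(F,T),(F,F)
print(stp(M_and, stp(x1, x2)))        # -> [[0], [1]], i.e. False
```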
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration but also for calculating the step values at the (n+2)th step. In this way, the integration can proceed two steps at a time. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods, or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
O'Neill, Barbara J; Dwyer, Trudy; Reid-Searl, Kerry; Parkinson, Lynne
2018-03-01
To predict the factors that are most important in explaining nursing staff intentions towards early detection of the deteriorating health of a resident and providing subacute care in the nursing home setting. Nursing staff play a pivotal role in managing the deteriorating resident and determining whether the resident needs to be transferred to hospital or remain in the nursing home; however, there is a dearth of literature that explains the factors that influence their intentions. This information is needed to underpin hospital avoidance programs that aim to enhance nursing confidence and skills in this area. A convergent parallel mixed-methods study, using the theory of planned behaviour as a framework. Surveys and focus groups were conducted with nursing staff (n = 75) at a 94-bed nursing home at two points in time, prior to and following the implementation of a hospital avoidance program. The quantitative and qualitative data were analysed separately and merged during final analysis. Nursing staff had strong intentions, a positive attitude that became significantly more positive with the hospital avoidance program in place, and a reasonable sense of control; however, the influence of important referents was the strongest predictor of intention towards managing residents with deteriorating health. Support from a hospital avoidance program empowered staff and increased confidence to intervene. The theory of planned behaviour served as an effective framework for identifying the strong influence referents had on nursing staff intentions around managing residents with deteriorating health. Although nursing staff had a reasonable sense of control over this area of their work, they believed they benefitted from a hospital avoidance program initiated by the nursing home. Managers implementing hospital avoidance programs should consider the role of referents, appraise the known barriers and facilitators and take steps to identify those unique to their local situation. All levels of nursing staff play a role in preventing hospitalisation and should be consulted in the design, implementation and evaluation of any hospital avoidance strategies. © 2017 John Wiley & Sons Ltd.
Thompson, Tom P; Callaghan, Lynne; Hazeldine, Emma; Quinn, Cath; Walker, Samantha; Byng, Richard; Wallace, Gary; Creanor, Siobhan; Green, Colin; Hawton, Annie; Annison, Jill; Sinclair, Julia; Senior, Jane; Taylor, Adrian H
2018-01-01
Introduction: People with experience of the criminal justice system typically have worse physical and mental health, lower levels of mental well-being and have less healthy lifestyles than the general population. Health trainers have worked with offenders in the community to provide support for lifestyle change, enhance mental well-being and signpost to appropriate services. There has been no rigorous evaluation of the effectiveness and cost-effectiveness of providing such community support. This study aims to determine the feasibility and acceptability of conducting a randomised trial and delivering a health trainer intervention to people receiving community supervision in the UK. Methods and analysis: A multicentre, parallel, two-group randomised controlled trial recruiting 120 participants with 1:1 individual allocation to receive support from a health trainer and usual care or usual care alone, with mixed methods process evaluation. Participants receive community supervision from an offender manager in either a Community Rehabilitation Company or the National Probation Service. If they have served a custodial sentence, then they have to have been released for at least 2 months. The supervision period must have at least 7 months left at recruitment. Participants are interested in receiving support to change diet, physical activity, alcohol use and smoking and/or improve mental well-being. The primary outcome is mental well-being with secondary outcomes related to smoking, physical activity, alcohol consumption and diet. The primary outcome will inform sample size calculations for a definitive trial. Ethics and dissemination: The study has been approved by the Health and Care Research Wales Ethics Committee (REC reference 16/WA/0171). Dissemination will include publication of the intervention development process and findings for the stated outcomes, parallel process evaluation and economic evaluation in peer-reviewed journals. Results will also be disseminated to stakeholders and trial participants. Trial registration numbers: ISRCTN80475744; Pre-results. PMID:29866736
Cloud identification using genetic algorithms and massively parallel computation
NASA Technical Reports Server (NTRS)
Buckles, Bill P.; Petry, Frederick E.
1996-01-01
As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say, 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user's manual was written and distributed nationwide to scientists whose work might benefit from its availability. Several papers, including two journal articles, were produced.
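The subpopulation-and-migration machinery discussed in the final paragraph is easy to illustrate. Below is a toy island-model genetic algorithm in Python: each island maintains its own breeding pool, and after every epoch the islands exchange their best individuals around a ring. Everything here (bit-count fitness, parameters, ring topology) is a placeholder for exposition, not the MasPar implementation or the AVHRR cloud classifier.

```python
import random

# Toy island-model GA: independent subpopulations with periodic migration.
def fitness(ind):
    return sum(ind)                    # stand-in objective: count of 1-bits

def evolve(pop, n_gen=10, mut=0.01):
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]  # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))
            child = a[:cut] + b[cut:]  # one-point crossover
            child = [g ^ (random.random() < mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return pop

islands = [[[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
           for _ in range(4)]
for epoch in range(5):
    islands = [evolve(p) for p in islands]     # independent: parallelizable
    best = [max(p, key=fitness) for p in islands]
    for i, p in enumerate(islands):            # ring migration of the best
        p[p.index(min(p, key=fitness))] = best[i - 1]
print(max(fitness(ind) for p in islands for ind in p))
```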
3-D phononic crystals with ultra-wide band gaps
Lu, Yan; Yang, Yang; Guest, James K.; Srivastava, Ankit
2017-01-01
In this paper gradient-based topology optimization (TO) is used to discover 3-D phononic structures that exhibit ultra-wide normalized all-angle all-mode band gaps. The challenging computational task of repeated 3-D phononic band-structure evaluations is accomplished by a combination of a fast mixed variational eigenvalue solver and distributed Graphics Processing Unit (GPU) parallel computations. The TO algorithm utilizes the material distribution-based approach and a gradient-based optimizer. The design sensitivity for the mixed variational eigenvalue problem is derived using the adjoint method and is implemented through highly efficient vectorization techniques. We present optimized results for two-material simple cubic (SC), body centered cubic (BCC), and face centered cubic (FCC) crystal structures and show that in each of these cases different initial designs converge to single inclusion network topologies within their corresponding primitive cells. The optimized results show that large phononic stop bands for bulk wave propagation can be achieved at lower than close-packed spherical configurations, leading to lighter unit cells. For tungsten carbide-epoxy crystals we identify all-angle all-mode normalized stop bands exceeding 100%, which is larger than what is possible with only spherical inclusions. PMID:28233812
Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert
2016-07-01
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in a mildly reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies a 3 mm × 3 mm area. At a 16 kHz sampling rate, the ASIC consumes 195 µW power from a 3 V supply. The outer-product implementation of the natural gradient and Herault-Jutten ICA update rules demonstrates performance comparable to the benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environments. Experiments demonstrate perceptually clear separation and precise localization over a wide range of separation angles of two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC leads to an extremely small form factor, low-power microsystem for the source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.
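For readers unfamiliar with the outer-product form of the learning rules the chip implements, here is a minimal NumPy sketch of the natural-gradient ICA update on synthetic data. The mixing matrix, learning rate, and tanh score function are illustrative assumptions; the ASIC itself applies the rule in analog hardware to gradient signals from the microphone array.

```python
import numpy as np

# Natural-gradient ICA in outer-product form: dW = lr * (I - phi(y) y^T) W.
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 5000))               # two super-Gaussian sources
Amix = np.array([[1.0, 0.6],
                 [0.4, 1.0]])                 # hypothetical mixing matrix
x = Amix @ s                                  # observed mixtures

W = np.eye(2)                                 # unmixing matrix
lr = 0.001
for t in range(x.shape[1]):
    y = W @ x[:, t]
    phi = np.tanh(y)                          # score for super-Gaussian sources
    W += lr * (np.eye(2) - np.outer(phi, y)) @ W

print(W @ Amix)   # should roughly approach a scaled permutation matrix
```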
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jianqiu; Yang, Yu; Wu, Fangzhen
Synchrotron X-ray Topography is a powerful technique to study defect structures, particularly dislocation configurations, in single crystals. Complementing this technique with geometrical and contrast analysis can enhance the efficiency of quantitatively characterizing defects. In this study, the use of Synchrotron White Beam X-ray Topography (SWBXT) to determine the line directions of threading dislocations in 4H–SiC axial slices (samples cut parallel to the growth axis from the boule) is demonstrated. This technique is based on the fact that the projected line directions of dislocations in different reflections are different. Another technique also discussed is the determination of the absolute Burgers vectors of threading mixed dislocations (TMDs) using Synchrotron Monochromatic Beam X-ray Topography (SMBXT). This technique utilizes the fact that the contrast from TMDs varies on SMBXT images as their Burgers vectors change. By comparing observed contrast with the contrast from threading dislocations provided by ray tracing simulations, the Burgers vectors can be determined. Thereafter the distribution of TMDs with different Burgers vectors across the wafer is mapped and investigated.
Numerical Modeling of Mixing and Venting from Explosions in Bunkers
NASA Astrophysics Data System (ADS)
Liu, Benjamin
2005-07-01
2D and 3D numerical simulations were performed to study the dynamic interaction of explosion products in a concrete bunker with ambient air, stored chemical or biological warfare (CBW) agent simulant, and the surrounding walls and structure. The simulations were carried out with GEODYN, a multi-material, Godunov-based Eulerian code that employs adaptive mesh refinement and runs efficiently on massively parallel computer platforms. Tabular equations of state were used for all materials with the exception of any high explosives employed, which were characterized with conventional JWL models. An appropriate constitutive model was used to describe the concrete. Interfaces between materials were either tracked with a volume-of-fluid method that used high-order reconstruction to specify the interface location and orientation, or a capturing approach was employed with the assumption of local thermal and mechanical equilibrium. A major focus of the study was to estimate the extent of agent heating that could be obtained prior to venting of the bunker and resultant agent dispersal. Parameters investigated included the bunker construction, agent layout, energy density in the bunker and the yield-to-agent mass ratio. Turbulent mixing was found to be the dominant heat transfer mechanism for heating the agent.
Exploring Social Justice in Mixed/Divided Cities: From Local to Global Learning.
Shdaimah, Corey; Lipscomb, Jane; Strier, Roni; Postan-Aizik, Dassi; Leviton, Susan; Olsen, Jody
University of Haifa and the University of Maryland, Baltimore faculty developed a parallel binational, interprofessional American-Israeli course which explores social justice in the context of increasing urban, local, and global inequities. This article describes the course's innovative approach to critically examine how social justice is framed in mixed/divided cities from different professional perspectives (social work, health, law). Participatory methods such as photo-voice, experiential learning, and theatre of the oppressed provide students with a shared language and multiple media to express and problematize their own and others' understanding of social (in)justice and to imagine social change. Much learning about "self" takes place in an immersion experience with "others." Crucial conversations about "the other" and social justice can occur more easily within the intercultural context. In these conversations, students and faculty experience culture as diverse, complex, and personal. Students and faculty alike found the course personally and professionally transformative. Examination of social justice in Haifa and Baltimore strengthened our appreciation for the importance of context and the value of global learning to provide insights on local challenges and opportunities. Copyright © 2016 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolme, David S; Mikkilineni, Aravind K; Rose, Derek C
Analog computational circuits have been demonstrated to provide substantial improvements in power and speed relative to digital circuits, especially for applications requiring extreme parallelism but only modest precision. Deep machine learning is one such area and stands to benefit greatly from analog and mixed-signal implementations. However, even at modest precisions, offsets and non-linearity can degrade system performance. Furthermore, in all but the simplest systems, it is impossible to directly measure the intermediate outputs of all sub-circuits. The result is that circuit designers are unable to accurately evaluate the non-idealities of computational circuits in-situ and are therefore unable to fully utilize measurement results to improve future designs. In this paper we present a technique to use deep learning frameworks to model physical systems. Recently developed libraries like TensorFlow make it possible to use back propagation to learn parameters in the context of modeling circuit behavior. Offsets and scaling errors can be discovered even for sub-circuits that are deeply embedded in a computational system and not directly observable. The learned parameters can be used to refine simulation methods or to identify appropriate compensation strategies. We demonstrate the framework using a mixed-signal convolution operator as an example circuit.
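As a concrete illustration of the idea, the sketch below uses TensorFlow's automatic differentiation to recover the offset and gain of a hypothetical analog sub-circuit from end-to-end measurements only, without observing the sub-circuit directly. The measurement model and every constant are invented for the example; the paper's demonstration circuit is a mixed-signal convolution operator.

```python
import numpy as np
import tensorflow as tf

# True (unknown) non-idealities of a hypothetical analog multiplier stage.
true_gain, true_offset = 0.92, 0.05

x = np.random.randn(1024, 2).astype(np.float32)
ideal = x[:, 0] * x[:, 1]                       # what the stage should compute
measured = (true_gain * ideal + true_offset).astype(np.float32)

# Model the non-idealities as trainable parameters and fit by backpropagation.
gain = tf.Variable(1.0)
offset = tf.Variable(0.0)
opt = tf.keras.optimizers.Adam(0.05)
for _ in range(300):
    with tf.GradientTape() as tape:
        pred = gain * ideal + offset
        loss = tf.reduce_mean((pred - measured) ** 2)
    grads = tape.gradient(loss, [gain, offset])
    opt.apply_gradients(zip(grads, [gain, offset]))

print(gain.numpy(), offset.numpy())   # approaches 0.92 and 0.05
```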
Moser, Aline; Wüthrich, Daniel; Bruggmann, Rémy; Eugster-Meier, Elisabeth; Meile, Leo; Irmler, Stefan
2017-01-01
The advent of massively parallel sequencing technologies has opened up possibilities for the study of the bacterial diversity of ecosystems without the need for enrichment or single-strain isolation. By exploiting 78 genome datasets from Lactobacillus helveticus strains, we found that the slpH locus, which encodes a putative surface-layer protein, displays sufficient genetic heterogeneity to be a suitable target for strain typing. Based on high-throughput slpH gene sequencing and the detection of single-base DNA sequence variations, we established a culture-independent method to assess the biodiversity of the L. helveticus strains present in fermented dairy food. When we applied the method to study the L. helveticus strain composition in 15 natural whey cultures (NWCs) collected at different production facilities for Gruyère, a protected designation of origin (PDO) cheese, we detected a total of 10 sequence types (STs). In addition, we monitored the development of a three-strain mix in raclette cheese for 17 weeks. PMID:28775722
Parallelization strategies for continuum-generalized method of moments on multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in the GMM framework. However, the computation takes a very long time because the regularization parameter must be optimized. Unfortunately, these calculations are processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, i.e., Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
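A minimal Python analogue of the winning outer-loop strategy follows: each candidate regularization parameter is evaluated independently, so the search distributes naturally across workers (OpenMP threads in the paper; a process pool here). The objective is a stand-in Tikhonov-regularized quadratic form, not the actual C-GMM criterion, and the model-selection rule is likewise only illustrative.

```python
from multiprocessing import Pool

import numpy as np

def objective(alpha):
    # Stand-in for one C-GMM evaluation at regularization parameter alpha.
    rng = np.random.default_rng(0)
    K = rng.standard_normal((200, 200))
    K = K @ K.T                                  # mock moment-covariance operator
    g = rng.standard_normal(200)
    w = np.linalg.solve(K + alpha * np.eye(200), g)  # Tikhonov-regularized solve
    return alpha, g @ w

if __name__ == "__main__":
    grid = np.logspace(-6, 0, 16)                # candidate regularization values
    with Pool() as pool:
        results = pool.map(objective, grid)      # the outer loop, in parallel
    best_alpha = min(results, key=lambda r: abs(r[1]))[0]
    print(best_alpha)
```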
Zou, Yu; Sun, Yunxiang; Zhu, Yuzhen; Ma, Buyong; Nussinov, Ruth; Zhang, Qingwen
2016-03-16
The aggregation of the copper-zinc superoxide dismutase (SOD1) protein is linked to familial amyotrophic lateral sclerosis, a progressive neurodegenerative disease. A recent experimental study has shown that the (147)GVIGIAQ(153) SOD1 C-terminal segment not only forms amyloid fibrils in isolation but also accelerates the aggregation of full-length SOD1, while substitution of isoleucine at site 149 by proline blocks its fibril formation. Amyloid formation is a nucleation-polymerization process. In this study, we investigated the oligomerization and the nucleus structure of this heptapeptide. By performing extensive replica-exchange molecular dynamics (REMD) simulations and conventional MD simulations, we found that the GVIGIAQ hexamers can adopt highly ordered bilayer β-sheets and β-barrels. In contrast, substitution of I149 by proline significantly reduces the β-sheet probability and results in the disappearance of bilayer β-sheet structures and the increase of disordered hexamers. We identified mixed parallel-antiparallel bilayer β-sheets in both REMD and conventional MD simulations and provided the conformational transition from the experimentally observed parallel bilayer sheets to the mixed parallel-antiparallel bilayer β-sheets. Our simulations suggest that the critical nucleus consists of six peptide chains and two additional peptide chains strongly stabilize this critical nucleus. The stabilized octamer is able to recruit additional random peptides into the β-sheet. Therefore, our simulations provide insights into the critical nucleus formation and the smallest stable nucleus of the (147)GVIGIAQ(153) peptide.
Parallel Fast Multipole Method For Molecular Dynamics
2007-06-01
Master's thesis by Reid G. Ormseth, Captain, USAF (AFIT/GAP/ENP/07-J02), Air Force Institute of Technology, Department of the Air Force, on a parallel fast multipole method for molecular dynamics; background material is drawn from 'The Art of Molecular Dynamics Simulation' by Dennis Rapaport, cited as the clearest treatment of the fast multipole method. (Only the report cover page is recoverable.)
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment are properties of the algorithm and not dependent on peculiarities of any machine.
Parallelization of PANDA discrete ordinates code using spatial decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.
2006-07-01
We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining over directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA.
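The diagonal-ordered sweep is the essential concurrency idea: for a fixed sweep direction, subdomains on the same anti-diagonal have no mutual dependencies and can execute simultaneously. A tiny Python sketch of the ordering, in 2D for brevity (PANDA's 3D version sweeps diagonal planes of the Cartesian domain topology):

```python
# Wavefront ordering for a transport sweep from the lower-left corner of a
# 2-D Cartesian domain decomposition: each anti-diagonal is one parallel step.
nx, ny = 4, 3                                  # grid of subdomains
for d in range(nx + ny - 1):                   # anti-diagonal index
    wave = [(i, d - i) for i in range(nx) if 0 <= d - i < ny]
    print(f"step {d}: sweep subdomains {wave} concurrently")
```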
Pluye, Pierre; Hong, Quan Nha
2014-01-01
This article provides an overview of mixed methods research and mixed studies reviews. These two approaches are used to combine the strengths of quantitative and qualitative methods and to compensate for their respective limitations. This article is structured in three main parts. First, the epistemological background for mixed methods will be presented. Afterward, we present the main types of mixed methods research designs and techniques as well as guidance for planning, conducting, and appraising mixed methods research. In the last part, we describe the main types of mixed studies reviews and provide a tool kit and examples. Future research needs to offer guidance for assessing mixed methods research and reporting mixed studies reviews, among other challenges.
Bishop, Felicity L
2015-02-01
To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obfuscate the need to address philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution: What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound (B&B) method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search-tree sizes, and supercomputer interconnect characteristics, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
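The simulator's central abstraction, replacing the real B&B run by a stochastic branching process measured in logical time, fits in a few lines of Python. The branching law, the one-node-per-step work model, and the steal-half load-balancing policy below are illustrative assumptions for the sketch, not the tool's actual models:

```python
import random

# Toy simulation of parallel branch-and-bound as a stochastic branching
# process: each node spawns up to two children with a probability that
# decays with depth (mimicking pruning), and idle processors steal half
# of the busiest frontier. Logical steps approximate the makespan.
random.seed(42)
P, p0, decay = 4, 0.95, 0.93

frontiers = [[] for _ in range(P)]
frontiers[0] = [0]                       # root node (depth 0) on processor 0
steps = 0
while any(frontiers):
    for f in frontiers:                  # each processor expands one node
        if f:
            depth = f.pop()
            p = p0 * decay ** depth
            f.extend([depth + 1] * sum(random.random() < p for _ in range(2)))
    for i in range(P):                   # idle processors steal work
        if not frontiers[i]:
            donor = max(range(P), key=lambda j: len(frontiers[j]))
            half = len(frontiers[donor]) // 2
            frontiers[i], frontiers[donor] = (frontiers[donor][:half],
                                              frontiers[donor][half:])
    steps += 1
print("logical steps (makespan):", steps)
```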
Peng, Qian; Pavlik, Jeffrey W.; Silvernail, Nathan J.; ...
2016-03-21
The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP = tetra-para-fluorophenylporphyrin; 1-MeIm = 1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with ν50, as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X = N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest frequency band at ≈560 cm-1 has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. In conclusion, the results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents.
NASA Astrophysics Data System (ADS)
Liu, Chaoyang; Zhao, Yanhui; Wang, Zhenguo; Wang, Hongbo; Sun, Mingbo
2017-07-01
The interaction between a sonic transverse jet and a supersonic crossflow coupled with a cavity flameholder is investigated using large eddy simulation (LES), with emphasis on the compressible flow dynamics and the fuel mixing mechanism. An adaptive central-upwind 6th-order weighted essentially non-oscillatory (WENO-CU6) scheme along with multi-threaded, multi-process MPI/OpenMP parallelization is adopted to improve the accuracy and parallel efficiency of the solver. The simulation aims to reproduce the flow conditions of the experiment, and the results show fairly good agreement with the experimental data for distributions of the streamwise and normal velocity components. Instantaneous structures such as the shock, large-scale vortices and the recirculation zone are identified, and their spatial deformation and temporal evolution are presented to reveal their effect on the subsequent mixing. Time-averaged and statistical results are then obtained to explain an interesting phenomenon observed in the experiment: two pairs of counter-rotating streamwise vortices exist in and above the cavity with the same rotation direction. The upper pair is induced by the transverse momentum of the jet in the supersonic crossflow, corresponding to the counter-rotating vortices (CRVs) of flat-plate injection. Owing to entrainment, the reflux in the cavity is transported to the core of the jet wake, and another pair of counter-rotating streamwise vortices is formed below under the effect of the cavity. A pair of trailing CRVs is generated at the trailing edge of the cavity, and the turbulent kinetic energy (TKE) there is markedly higher than in other regions. To some extent, the cavity can enhance the mixing, but does not bring excess total pressure loss.
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
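For orientation, a left-looking column Cholesky in NumPy shows where the vector and parallel structure arises: each column update is a dense vector operation (the part vector hardware accelerates), and updates to different trailing columns are independent (the part that parallelizes). This is a generic textbook sketch, not the Force implementation described in the paper.

```python
import numpy as np

def cholesky(A):
    """Left-looking Cholesky: returns lower-triangular L with A = L @ L.T."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for j in range(n):
        # vectorized update of column j using all previous columns
        A[j, j] = np.sqrt(A[j, j] - A[j, :j] @ A[j, :j])
        A[j+1:, j] = (A[j+1:, j] - A[j+1:, :j] @ A[j, :j]) / A[j, j]
    return np.tril(A)

M = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky(M)
assert np.allclose(L @ L.T, M)
```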
Guthrie, K. M.; Lanoye, A.; Tate, D. F.; Robichaud, E.; Caccavale, L. J.; Wing, R. R.
2016-01-01
Summary. Objective: Emerging adults ages 18–25 are at high risk for obesity, but are markedly underrepresented in behavioural weight loss (BWL) programs and experience lower engagement and retention relative to older adults. Purpose: To utilize a mixed methods approach to inform future efforts to effectively recruit and engage this high-risk population in BWL programs. Methods: We used a convergent parallel design in which quantitative and qualitative data were given equal priority. Study 1 (N = 137, age = 21.8 ± 2.2, BMI = 30.1 ± 4.7) was a quantitative survey, conducted online to reduce known barriers and minimize bias. Study 2 (N = 7 groups, age = 22.3 ± 2.2, BMI = 31.5 ± 4.6) was a qualitative study, consisting of in-person focus groups to gain greater depth and identify contextual factors unable to be captured in Study 1. Results: Weight loss was of interest, but weight itself was not a central motivation; an emphasis on overall lifestyle, self-improvement and fitness emerged as driving factors. Key barriers were time, motivation and money. Recruitment processes should be primarily online with messages tailored specifically to the motivations and preferences of this age group. Preferences were for a reduced-intensity, brief, hybrid-format program with some in-person contact, individual-level coaching, experiential learning and peer support. Key methods of promoting engagement and retention were autonomy and choice, money and creating an optimal default. Conclusions: An individually tailored lifestyle intervention that addresses a spectrum of health behaviours, promotes autonomy and emphasizes activity and fitness may facilitate recruitment and engagement in this population better than traditional BWL protocols. PMID:28090339
Guo, Qiaohong; Chochinov, Harvey Max; McClement, Susan; Thompson, Genevieve; Hack, Tom
2018-01-01
Effective patient-family communication can reduce patients' psychosocial distress and relieve family members' current suffering and their subsequent grief. However, terminally ill patients and their family members often experience great difficulty in communicating their true feelings, concerns, and needs to each other. To develop a novel means of facilitating meaningful conversations for palliative patients and family members, coined Dignity Talk, explore anticipated benefits and challenges of using Dignity Talk, and solicit suggestions for protocol improvement. A convergent parallel mixed-methods design. Dignity Talk, a self-administered question list, was designed to prompt end-of-life conversations, adapted from the Dignity Therapy question framework. Participants were surveyed to evaluate the Dignity Talk question framework. Data were analyzed using qualitative and quantitative methods. A total of 20 palliative patients, 20 family members, and 34 healthcare providers were recruited from two inpatient palliative care units in Winnipeg, Canada. Most Dignity Talk questions were endorsed by the majority of patients and families (>70%). Dignity Talk was revised to be convenient and flexible to use, broadly accessible, clearly stated, and sensitively worded. Participants felt Dignity Talk would be valuable in promoting conversations, enhancing family connections and relationships, enhancing patient sense of value and dignity, promoting effective interaction, and attending to unfinished business. Participants suggested that patients and family members be given latitude to respond only to questions that are meaningful to them and within their emotional capacity to broach. Dignity Talk may provide a gentle means of facilitating important end-of-life conversations.
2016-05-11
AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang (Korea), AFOSR grant FA2386. (Only the report documentation page is recoverable.)
Parallel computing method for simulating hydrological processes of large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Hydrological process simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize over the space and time dimensions, computing the natural features (grid units or subbasins) of the distributed hydrological model in order from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible: it makes full use of the available computing and storage resources under resource constraints, and its efficiency improves linearly as computing resources are added. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
The application of mixed methods designs to trauma research.
Creswell, John W; Zhang, Wanqing
2009-12-01
Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.
Parallel multiphase microflows: fundamental physics, stabilization methods and applications.
Aota, Arata; Mawatari, Kazuma; Kitamori, Takehiko
2009-09-07
Parallel multiphase microflows, which can integrate unit operations in a microchip under continuous flow conditions, are discussed. Fundamental physics, stabilization methods and some applications are shown.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processor Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part; a task-parallel part; and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT) and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
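The IDWT rewrite rests on separability: a 2D inverse transform is a 1D inverse applied to every column and then to every row, and each of those 1D transforms is independent, which is what maps the work onto thousands of GPU threads. A NumPy sketch with single-level Haar filters (a simplifying assumption; the codec's actual wavelet filters may differ):

```python
import numpy as np

def ihaar1d(approx, detail):
    # Inverse of the 1-D Haar step a=(x0+x1)/sqrt(2), d=(x0-x1)/sqrt(2).
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def ihaar2d(coeffs):
    # Single-level separable inverse: independent 1-D transforms over all
    # columns, then over all rows; each is a natural GPU work item.
    n = coeffs.shape[0] // 2
    assert coeffs.shape[0] == coeffs.shape[1] == 2 * n, "square, even-sided"
    cols = np.stack([ihaar1d(coeffs[:n, j], coeffs[n:, j])
                     for j in range(2 * n)], axis=1)
    return np.stack([ihaar1d(cols[i, :n], cols[i, n:])
                     for i in range(2 * n)], axis=0)
```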
Parallel pivoting combined with parallel reduction
NASA Technical Reports Server (NTRS)
Alaghband, Gita
1987-01-01
Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
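The compatibility relation is simple to make concrete: two pivots are compatible when they share no row or column, so both can be eliminated in the same parallel step. The sketch below greedily assembles a compatible pivot set ranked by Markowitz count; the relative stability threshold is an illustrative assumption, not the paper's exact test.

```python
import numpy as np

def compatible_pivots(A, tol=1e-8):
    # Rank pivot candidates (i, j) by Markowitz count (r_i - 1)(c_j - 1),
    # keep only numerically acceptable entries, then greedily pick a set of
    # mutually compatible pivots (no shared row or column).
    nz = np.abs(A) > tol
    r = nz.sum(axis=1)                   # nonzeros per row
    c = nz.sum(axis=0)                   # nonzeros per column
    cand = sorted((int((r[i] - 1) * (c[j] - 1)), i, j)
                  for i, j in zip(*np.nonzero(nz))
                  if abs(A[i, j]) > 0.1 * np.abs(A[i]).max())  # stability
    used_r, used_c, chosen = set(), set(), []
    for _, i, j in cand:
        if i not in used_r and j not in used_c:
            chosen.append((i, j))
            used_r.add(i)
            used_c.add(j)
    return chosen                        # eliminate these simultaneously

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 5.0]])
print(compatible_pivots(A))              # e.g. [(1, 1), (0, 0), (2, 2)]
```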
Steady and unsteady calculations on thermal striping phenomena in triple-parallel jet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y. Q.; Merzari, E.; Thomas, J. W.
2017-02-01
The phenomenon of thermal striping is encountered in liquid metal cooled fast reactors (LMFR), in which temperature fluctuation due to convective mixing between hot and cold fluids can lead to crack initiation and propagation in the structure due to high-cycle thermal fatigue. Using sodium experiments on a parallel triple-jet configuration performed by the Japan Atomic Energy Agency (JAEA) as a benchmark, numerical simulations were carried out to evaluate the temperature fluctuation characteristics in the fluid and the transfer characteristics of temperature fluctuation from fluid to structure, which are important for assessing potential thermal fatigue damage. In this study, both steady (RANS) and unsteady (URANS, LES) methods were applied to predict the temperature fluctuations of thermal striping. Parametric studies on the effects of mesh density and boundary conditions on the accuracy of the overall solutions were also conducted. The velocity, temperature and temperature fluctuation intensity distributions were compared with the experimental data. As expected, the steady calculation had limited success in predicting the thermal-hydraulic characteristics of thermal striping, highlighting the limitations of the RANS approach in unsteady heat transfer simulations. The unsteady results exhibited reasonably good agreement with experimental results for temperature fluctuation intensity, as well as for the average temperature and velocity components at the measurement locations.
Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and the job-splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
Ben Jomaa, Meriam; Chebbi, Hammouda; Fakhar Bourguiba, Noura; Zid, Mohamed Faouzi
2018-02-01
The synthesis of p-toluidinium perchlorate (systematic name: 4-methylanilinium perchlorate), C7H10N+·ClO4-, was carried out by an aqueous reaction of perchloric acid with p-toluidine. This compound was characterized by powder XRD, IR and UV-Vis spectroscopy. The structure was further confirmed by a single-crystal X-ray diffraction study. The crystal structure is formed by a succession of two-dimensional molecular layers consisting of perchlorate anions and organic cations parallel to the (100) plane and located at x = 2n + ½ (n ∈ Z). Each mixed layer is formed by infinite chains {C7H10N+·ClO4-}n parallel to the [010] direction and developing along the c axis, generating R24(8), R22(4) and R44(12) graph-set motifs. The results of a theoretical study using the DFT method at the B3LYP/6-311++G(d,p) level are in good agreement with the experimental data. Hirshfeld surface and fingerprint plots reveal that the structure is dominated by O⋯H/H⋯O (54.2%), H⋯H (26.9%) and C-H⋯π (14.3%) contacts. The studied crystal was refined as a two-component twin.
Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems
NASA Technical Reports Server (NTRS)
Chen, Hsin-Chu; He, Ai-Fang
1993-01-01
The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assembly process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
Scott, Anthony; Jeon, Sung-Hee; Joyce, Catherine M; Humphreys, John S; Kalb, Guyonne; Witt, Julia; Leahy, Anne
2011-09-05
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. The online mode had a response rate of 12.95%, the simultaneous mixed mode 19.7%, and the sequential mixed mode 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for the interpretation and direction of DOE-sponsored companion experiments utilizing a Xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques which enhance execution speed by permitting specification of a very low accuracy tolerance.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.
1993-01-01
The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research were concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
Halász, István Zoltán; Bárány, Tamás
2016-01-01
In this work, the effect of mixing temperature (Tmix) on the mechanical, rheological, and morphological properties of rubber/cyclic butylene terephthalate (CBT) oligomer compounds was studied. Apolar (styrene butadiene rubber, SBR) and polar (acrylonitrile butadiene rubber, NBR) rubbers were modified with CBT (20 phr) for reinforcement and viscosity reduction. The mechanical properties were determined in tensile, tear, and dynamic mechanical thermal analysis (DMTA) tests. Viscosity changes caused by CBT were assessed by parallel-plate rheometry, and the morphology was studied by scanning electron microscopy (SEM). CBT became better dispersed in the rubber matrices at elevated mixing temperatures (at which CBT was in a partially molten state), which resulted in improved tensile properties. With increasing mixing temperature the size of the CBT particles in the compounds decreased significantly, from a few hundred microns to 5-10 microns. Compounding at temperatures above 120 °C and 140 °C for NBR and SBR, respectively, yielded reduced tensile mechanical properties, most likely due to degradation of the base rubber. The viscosity reduction by CBT was more pronounced in mixes with coarser CBT dispersions prepared at lower mixing temperatures. PMID:28773841
Inertial instabilities in a mixing-separating microfluidic device
NASA Astrophysics Data System (ADS)
Domingues, Allysson; Poole, Robert; Dennis, David
2017-11-01
Combining and separating fluids has many industrial and biomedical applications. This numerical and experimental study explores inertial instabilities in a so-called mixing-separating cell micro-geometry which could potentially be used to enhance mixing. Our microfluidic mixing-separating cell consists of two straight square parallel channels with flow from opposite directions and a central gap that allows the streams to interact, mix or remain separate (often referred to as the `H' geometry). A stagnation point is generated at the centre of symmetry due to the two opposed inlets and outlets. Under creeping flow conditions (Reynolds number Re → 0) the flow is steady, two-dimensional and produces a sharp symmetric boundary between the fluid streams entering the geometry from opposite directions. For Re > 30, an inertial instability appears which leads to the generation of a central vortex and the breaking of symmetry, although the flow remains steady. As Re increases the central vortex divides into two vortices. Our experimental and numerical investigations both show the same phenomena. The results suggest that the effect observed can be exploited to enhance mixing in biomedical or other applications. Work supported by CNPq Grant 203195/2014-0.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Dynamic multistation photometer
Bauer, Martin L.; Johnson, Wayne F.; Lakomy, Dale G.
1977-01-01
A portable fast analyzer is provided that uses a magnetic clutch/brake to rapidly accelerate the analyzer rotor, and employs a microprocessor for automatic analyzer operation. The rotor is held stationary while the drive motor is run up to speed. When it is desired to mix the sample(s) and reagent(s), the brake is deenergized and the clutch is energized, wherein the rotor is very rapidly accelerated to the running speed. The parallel path rotor that is used allows the samples and reagents to be mixed the moment they are spun out into the rotor cuvettes, and data acquisition begins immediately. The analyzer will thus have special utility for fast reactions.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1995-01-01
A unique formulation of describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method,' is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding numerical diffusion resulting from mixing of fluxes in the Eulerian description. The present method and the Arbitrary Lagrangian-Eulerian (ALE) method have a similarity in spirit: both eliminate cross-streamline numerical diffusion. For this purpose, we suggest a simple grid constraint condition and utilize an accurate discretization procedure. This grid constraint is applied only to the transverse cell face parallel to the local stream velocity, and hence, for steady state problems, our method naturally reduces to the streamline-curvature method, without explicitly solving the steady stream-coordinate equations formulated a priori. Unlike the Lagrangian method proposed by Loh and Hui, which is valid only for steady supersonic flows, the present method is general and capable of treating subsonic and supersonic flows as well as unsteady flows, simply by invoking in the same code an appropriate grid constraint suggested in this paper. The approach is found to be robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.
Falk Delgado, Alberto; Falk Delgado, Anna
2017-07-26
To describe the prevalence and types of conflicts of interest (COI) in published randomized controlled trials (RCTs) in general medical journals with a binary primary outcome and to assess the association between COI and favorable outcome. Parallel-group RCTs with a binary primary outcome published in three general medical journals during 2013-2015 were identified. COI type, funding source, and outcome were extracted. A binomial logistic regression model was used to assess the association of COI and funding source with outcome. A total of 509 consecutive parallel-group RCTs were included in the study. COI was reported in 74% of mixed-funded RCTs and in 99% of for-profit funded RCTs. Stock ownership was reported in none of the non-profit RCTs, in 7% of mixed-funded RCTs, and in 50% of for-profit funded RCTs. Mixed-funded RCTs had employees from the funding company in 11% of cases and for-profit RCTs in 76%. Multivariable logistic regression revealed that stock ownership in the funding company among any of the authors was associated with a favorable outcome (odds ratio = 3.53; 95% confidence interval = 1.59-7.86; p < 0.01). COI in for-profit funded RCTs is extensive. Because the factors related to COI are not fully independent, the multivariable analysis should be interpreted cautiously; however, after multivariable adjustment, only stock ownership in the funding company among authors was associated with a favorable outcome.
Warmer, deeper, and greener mixed layers in the North Atlantic subpolar gyre over the last 50 years.
Martinez, Elodie; Raitsos, Dionysios E; Antoine, David
2016-02-01
Shifts in global climate resonate in plankton dynamics, biogeochemical cycles, and marine food webs. We studied these linkages in the North Atlantic subpolar gyre (NASG), which hosts extensive phytoplankton blooms. We show that phytoplankton abundance increased since the 1960s in parallel to a deepening of the mixed layer and a strengthening of winds and heat losses from the ocean, as driven by the low frequency of the North Atlantic Oscillation (NAO). In parallel to these bottom-up processes, the top-down control of phytoplankton by copepods decreased over the same time period in the western NASG, following sea surface temperature changes typical of the Atlantic Multi-decadal Oscillation (AMO). While previous studies have hypothesized that climate-driven warming would facilitate seasonal stratification of surface waters and long-term phytoplankton increase in subpolar regions, here we show that deeper mixed layers in the NASG can be warmer and host a higher phytoplankton biomass. These results emphasize that different modes of climate variability regulate bottom-up (NAO control) and top-down (AMO control) forcing on phytoplankton at decadal timescales. As a consequence, different relationships between phytoplankton, zooplankton, and their physical environment appear subject to the disparate temporal scale of the observations (seasonal, interannual, or decadal). The prediction of phytoplankton response to climate change should be built upon what is learnt from observations at the longest timescales. © 2015 John Wiley & Sons Ltd.
Rheology and Extrusion of Cement-Fly Ashes Pastes
NASA Astrophysics Data System (ADS)
Micaelli, F.; Lanos, C.; Levita, G.
2008-07-01
The addition of fly ashes to cement pastes is tested to optimize the forming of cement-based materials by extrusion. Two sizes of fly ash grains are examined. The rheology of concentrated suspensions of ash mixes is studied with a parallel-plate rheometer. In the stationary flow state, the viscosities of the tested suspensions are satisfactorily described by the Krieger-Dougherty model. An "overlapped grain" suspension model able to describe the bimodal suspension behaviour is proposed. For higher values of solid volume fraction, Bingham viscoplastic behaviour is identified. Results showed that the plastic viscosity and plastic yield values reach their minima at the same optimal formulation of bimodal mixes. The rheological study is extended to more concentrated systems using an extruder. Finally, it is observed that the addition of 30% vol. of the optimized ash mix produced a significant reduction of the required extrusion load.
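For reference, the Krieger-Dougherty relation invoked for the stationary-flow regime has the standard textbook form (the parameter values fitted in the study are not given in the abstract):

```latex
\eta_r \;=\; \frac{\eta}{\eta_0} \;=\; \left(1 - \frac{\phi}{\phi_m}\right)^{-[\eta]\,\phi_m}
```

where \(\eta_0\) is the suspending-fluid viscosity, \(\phi\) the solid volume fraction, \(\phi_m\) the maximum packing fraction, and \([\eta]\) the intrinsic viscosity.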
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Near-linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
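The first parallel component, concurrent formation of the coefficient matrix rows, can be sketched as follows. This is a toy illustration only: the logarithmic kernel and the placeholder diagonal term are assumptions, not the paper's linear-element formulation, and the process pool stands in for the Sequent's processors.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N = 64                                   # boundary nodes (toy problem)
nodes = np.random.default_rng(2).random((N, 2))

def influence_row(i):
    """Row i of a toy influence-coefficient matrix (log kernel)."""
    d = np.linalg.norm(nodes - nodes[i], axis=1)
    row = -np.log(np.where(d > 0, d, 1.0)) / (2 * np.pi)
    row[i] = 1.0                         # placeholder self-influence term
    return row

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:    # rows formed concurrently
        A = np.vstack(list(ex.map(influence_row, range(N))))
    b = np.ones(N)
    x = np.linalg.solve(A, b)            # serial stand-in for parallel Gauss
    print(x[:4])
```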
NASA Technical Reports Server (NTRS)
Boulet, C.; Ma, Q.
2016-01-01
Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
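A minimal sketch of the polynomial-expansion idea behind such libraries, using SciPy sparse matrices. This is not NTPoly's API: a production code would also truncate small entries after each multiply to preserve sparsity, and the spectrum bounds and expansion degree below are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from numpy.polynomial import chebyshev as C

def chebyshev_matrix_function(A, coeffs, a, b):
    """Approximate f(A) for sparse symmetric A via a Chebyshev expansion.

    The spectrum of A is assumed to lie in [a, b]; coeffs are the
    Chebyshev coefficients of f pulled back to [-1, 1]. The cost is one
    sparse matrix-matrix multiply per term, the basis of linear-scaling
    methods when sparsity is maintained.
    """
    n = A.shape[0]
    I = sp.identity(n, format="csr")
    B = ((2.0 * A) - (b + a) * I) / (b - a)   # map spectrum into [-1, 1]
    T_prev, T_curr = I, B.copy()
    F = coeffs[0] * I + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_next = 2.0 * (B @ T_curr) - T_prev  # Chebyshev recurrence
        F = F + c * T_next
        T_prev, T_curr = T_curr, T_next
    return F

# Usage: approximate exp(A) for a toy sparse matrix with spectrum in [0, 4].
A = sp.diags([1.0, 2.0, 3.0]).tocsr()
a, b = 0.0, 4.0
g = lambda x: np.exp(0.5 * (b - a) * x + 0.5 * (b + a))  # exp pulled back to [-1, 1]
coeffs = C.Chebyshev.interpolate(g, deg=20).coef
F = chebyshev_matrix_function(A, coeffs, a, b)
print(F.diagonal(), np.exp([1.0, 2.0, 3.0]))             # should agree closely
```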
ERIC Educational Resources Information Center
Jordan, John; Wachsmann, Melanie; Hoisington, Susan; Gonzalez, Vanessa; Valle, Rachel; Lambert, Jarod; Aleisa, Majed; Wilcox, Rachael; Benge, Cindy L.; Onwuegbuzie, Anthony J.
2017-01-01
Surprisingly, scant information exists regarding the collaboration patterns of mixed methods researchers. Thus, the purpose of this mixed methods bibliometric study was to examine (a) the distribution of the number of co-authors in articles published in the flagship mixed methods research journal (i.e., "Journal of Mixed Methods…
The Value of Mixed Methods Research: A Mixed Methods Study
ERIC Educational Resources Information Center
McKim, Courtney A.
2017-01-01
The purpose of this explanatory mixed methods study was to examine the perceived value of mixed methods research for graduate students. The quantitative phase was an experiment examining the effect of a passage's methodology on students' perceived value. Results indicated students scored the mixed methods passage as more valuable than those who…
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
A Mixed Methods Sampling Methodology for a Multisite Case Study
ERIC Educational Resources Information Center
Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie
2012-01-01
The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…
La, Moonwoo; Park, Sang Min; Kim, Dong Sung
2015-01-01
In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely, a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and they were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performances, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of the sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemicals. PMID:25610516
NASA Technical Reports Server (NTRS)
Wang, P.; Li, P.
1998-01-01
A high-resolution numerical study of three-dimensional, time-dependent thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.
Mixed Methods Research: The "Thing-ness" Problem.
Hesse-Biber, Sharlene
2015-06-01
Contemporary mixed methods research (MMR) veers away from a "loosely bounded" to a "bounded" concept that has important negative implications for how qualitatively driven mixed methods approaches are positioned in the field of mixed methods and overall innovation in the praxis of MMR. I deploy the concept of reification defined as taking an object/abstraction and treating it as if it were real such that it takes on the quality of "thing-ness," having a concrete independent existence. I argue that the contemporary reification of mixed methods as a "thing" is fueled by three interrelated factors: (a) the growing formalization of mixed methods as design, (b) the unexamined belief in the "synergy" of mixed methods and, (c) the deployment of a "practical pragmatism" as the "philosophical partner" for mixed methods inquiry. © The Author(s) 2015.
Lee, Sang Ki; Kim, Kap Jung; Park, Kyung Hoon; Choy, Won Sik
2014-10-01
With the continuing improvements in implants for distal humerus fractures, it is expected that newer types of plates, which are anatomically precontoured, thinner and less irritating to soft tissue, would have comparable outcomes when used in a clinical study. The purpose of this study was to compare the clinical and radiographic outcomes in patients with distal humerus fractures who were treated with orthogonal and parallel plating methods using precontoured distal humerus plates. Sixty-seven patients with a mean age of 55.4 years (range 22-90 years) were included in this prospective study. The subjects were randomly assigned to receive 1 of 2 treatments: orthogonal or parallel plating. The following results were assessed: operating time, time to fracture union, presence of a step or gap at the articular margin, varus-valgus angulation, functional recovery, and complications. No intergroup differences were observed based on radiological and clinical results: no significant differences were found between the orthogonal and parallel plating methods in terms of clinical outcomes, mean operation time, union time, or complication rates. There were no cases of fracture nonunion in either group; heterotopic ossification was found in 3 patients in the orthogonal plating group and 2 patients in the parallel plating group. However, the orthogonal plating method may be preferred in cases of coronal shear fractures, where posterior to anterior fixation may provide additional stability to the intraarticular fractures, and the parallel plating method may be preferred for fractures at the most distal end of the humerus.
NASA Astrophysics Data System (ADS)
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three interpolation methods were studied, formalized and adapted to image scaling. The result of the work is a program that scales images by the different methods and a comparison of the scaling quality achieved by each.
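The abstract does not name the three interpolation methods, so the sketch below assumes bilinear interpolation and parallelizes over horizontal bands with a process pool, the usual data-parallel decomposition for image scaling. The band layout, worker count, and kernel are assumptions; the paper's Windows GUI application is not reproduced.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def scale_band(args):
    """Bilinearly interpolate one horizontal band of the output image."""
    src, y0, y1, out_h, out_w = args
    in_h, in_w = src.shape
    ys = np.clip((np.arange(y0, y1) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)
    yi = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    xi = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = (ys - yi)[:, None]
    wx = (xs - xi)[None, :]
    a = src[yi][:, xi]                   # four neighbouring pixels
    b = src[yi][:, xi + 1]
    c = src[yi + 1][:, xi]
    d = src[yi + 1][:, xi + 1]
    return (1 - wy) * ((1 - wx) * a + wx * b) + wy * ((1 - wx) * c + wx * d)

def scale_parallel(src, out_h, out_w, workers=4):
    bounds = np.linspace(0, out_h, workers + 1, dtype=int)
    tasks = [(src, bounds[k], bounds[k + 1], out_h, out_w)
             for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return np.vstack(list(ex.map(scale_band, tasks)))

if __name__ == "__main__":
    img = np.random.rand(240, 320)
    print(scale_parallel(img, 480, 640).shape)   # (480, 640)
```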
Method for implementation of recursive hierarchical segmentation on parallel computers
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2005-01-01
A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
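The two recursion thresholds the patent describes can be sketched as follows. This is a toy illustration, not the patented algorithm: the leaf "segmentation" is a placeholder threshold, the convergence-check level is omitted, and the quadtree division and level values are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

BOTTOM_LEVEL = 4        # where the recursive division stops dividing
INTERMEDIATE_LEVEL = 1  # where parallel recursion switches to serial

def segment_leaf(tile):
    """Placeholder leaf step: threshold against the tile mean.
    (The real algorithm runs hierarchical segmentation here.)"""
    return (tile > tile.mean()).astype(np.uint8)

def recursive_segment(tile, level=0):
    if level >= BOTTOM_LEVEL:
        return segment_leaf(tile)
    h, w = tile.shape
    quads = [tile[:h//2, :w//2], tile[:h//2, w//2:],
             tile[h//2:, :w//2], tile[h//2:, w//2:]]
    if level < INTERMEDIATE_LEVEL:
        # Parallel phase: quadrants go to separate processes. Only the
        # top level forks here, so workers never create nested pools.
        with ProcessPoolExecutor(max_workers=4) as ex:
            parts = list(ex.map(recursive_segment, quads, [level + 1] * 4))
    else:
        # Serial phase: deeper recursion is cheap enough to stay in-process.
        parts = [recursive_segment(q, level + 1) for q in quads]
    return np.vstack([np.hstack(parts[:2]), np.hstack(parts[2:])])

if __name__ == "__main__":
    print(recursive_segment(np.random.rand(256, 256)).shape)
```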
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
Address tracing for parallel machines
NASA Technical Reports Server (NTRS)
Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent
1991-01-01
Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.
Solution of a tridiagonal system of equations on the finite element machine
NASA Technical Reports Server (NTRS)
Bostic, S. W.
1984-01-01
Two parallel algorithms for the solution of tridiagonal systems of equations were implemented on the Finite Element Machine. The Accelerated Parallel Gauss method, an iterative method, and the Buneman algorithm, a direct method, are discussed and execution statistics are presented.
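Neither algorithm is reproduced here, but the flavor of an iterative solve that assigns one unknown per processor, as on the Finite Element Machine, can be sketched with a Jacobi sweep. This is a stand-in, not the Accelerated Parallel Gauss method; diagonal dominance is assumed for convergence.

```python
import numpy as np

def jacobi_tridiagonal(a, b, c, d, iters=200):
    """Jacobi iteration for a tridiagonal system.

    a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1),
    d: right-hand side (n). Each sweep updates all unknowns
    independently, which is what makes the method natural to run
    one-unknown-per-processor on a parallel machine.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = d.copy()
        x_new[1:] -= a * x[:-1]
        x_new[:-1] -= c * x[1:]
        x = x_new / b
    return x

n = 8
a = -np.ones(n - 1); c = -np.ones(n - 1); b = 4.0 * np.ones(n)
d = np.ones(n)
x = jacobi_tridiagonal(a, b, c, d)
# Residual check: should be close to machine precision.
print(np.max(np.abs(b * x + np.r_[0, a * x[:-1]] + np.r_[c * x[1:], 0] - d)))
```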
Parallelization of an Object-Oriented Unstructured Aeroacoustics Solver
NASA Technical Reports Server (NTRS)
Baggag, Abdelkader; Atkins, Harold; Oezturan, Can; Keyes, David
1999-01-01
A computational aeroacoustics code based on the discontinuous Galerkin method is ported to several parallel platforms using MPI. The discontinuous Galerkin method is a compact high-order method that retains its accuracy and robustness on non-smooth unstructured meshes. In its semi-discrete form, the discontinuous Galerkin method can be combined with explicit time marching methods, making it well suited to time accurate computations. The compact nature of the discontinuous Galerkin method also makes it well suited for distributed memory parallel platforms. The original serial code was written using an object-oriented approach and was previously optimized for cache-based machines. The port to parallel platforms was achieved simply by treating partition boundaries as a type of boundary condition. Code modifications were minimal because boundary conditions were abstractions in the original program. Scalability results are presented for the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. Slightly superlinear speedup is achieved on a fixed-size problem on the Origin, due to cache effects.
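The port strategy of treating a partition boundary as a type of boundary condition can be illustrated with a toy 1-D upwind advection solver under mpi4py (an assumption; the original code is an object-oriented discontinuous Galerkin solver, not reproduced here). Each rank takes its left ghost cell either from the physical inflow condition or from its upstream neighbour, through the same code path.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local, c, dx, dt = 100, 1.0, 0.01, 0.005    # CFL = 0.5
u = np.zeros(n_local)
if rank == 0:
    u[:10] = 1.0                              # inflow pulse on the first rank

for _ in range(200):
    if rank < size - 1:
        comm.send(u[-1], dest=rank + 1)       # feed the downstream partition
    # Partition boundary handled like a boundary condition: receive the
    # upstream ghost value, or apply the physical inflow BC on rank 0.
    ghost = comm.recv(source=rank - 1) if rank > 0 else 0.0
    um1 = np.concatenate(([ghost], u[:-1]))
    u = u - (c * dt / dx) * (u - um1)         # first-order upwind update

print(rank, float(u.max()))
```

Run with, e.g., `mpiexec -n 4 python advect.py` (the file name is hypothetical).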
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
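The distributed-diagonal method involves a specific block assignment and communication schedule not reproduced here; the sketch below shows only the underlying force-decomposition idea, partitioning the pairwise force matrix into blocks that independent processors could evaluate. The toy Lennard-Jones potential and block layout are assumptions, and the parallel reduction is simulated by a serial sum.

```python
import numpy as np
from itertools import combinations

def lj_force(ri, rj, eps=1.0, sigma=1.0):
    """Lennard-Jones force on particle i due to particle j."""
    d = ri - rj
    r2 = np.dot(d, d)
    inv6 = (sigma * sigma / r2) ** 3
    return 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / r2 * d

def block_forces(pos, rows, cols):
    """Partial forces from one block of the force matrix.

    A processor owning block (rows, cols) evaluates only the pair
    interactions between its row and column particles, exploiting
    F_ji = -F_ij (Newton's third law) so each pair is computed once.
    """
    f = np.zeros_like(pos)
    for i in rows:
        for j in cols:
            if i < j:                    # each unordered pair once
                fij = lj_force(pos[i], pos[j])
                f[i] += fij
                f[j] -= fij
    return f

n, p = 12, 3                             # particles, "processors"
pos = np.random.default_rng(1).random((n, 3)) * 5.0
chunks = np.array_split(np.arange(n), p)
# Enumerate all diagonal and off-diagonal blocks and sum the partial
# results, which is what the parallel reduction would do across processors.
total = sum(block_forces(pos, r, c) for r, c in combinations(chunks, 2)) \
      + sum(block_forces(pos, r, r) for r in chunks)
print(np.abs(total.sum(axis=0)).max())   # ~0: internal forces sum to zero
```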
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.
Exploring partners' perspectives on participation in heart failure home care: a mixed-method design.
Näsström, Lena; Luttik, Marie Louise; Idvall, Ewa; Strömberg, Anna
2017-05-01
To describe the partners' perspectives on participation in the care for patients with heart failure receiving home care. Partners are often involved in care of patients with heart failure and have an important role in improving patients' well-being and self-care. Partners have described both negative and positive experiences of involvement, but knowledge of how partners of patients with heart failure view participation in care when the patient receives home care is lacking. A convergent parallel mixed-method design was used, including data from interviews and questionnaires. A purposeful sample of 15 partners was used. Data collection lasted from February 2010 to December 2011. Interviews were analysed with content analysis and data from questionnaires (participation, caregiving, health-related quality of life, depressive symptoms) were analysed statistically. Finally, results were merged, interpreted and labelled as comparable and convergent or as being inconsistent. Partners were satisfied with most aspects of participation, information and contact. Qualitative findings revealed four different aspects of participation: adapting to the caring needs and illness trajectory, coping with caregiving demands, interacting with healthcare providers and need for knowledge to comprehend the health situation. Results showed confirmatory results that were convergent and expanded knowledge that gave a broader understanding of partner participation in this context. The results revealed different levels of partner participation. Heart failure home care included good opportunities for both participation and contact during home visits, necessary to meet partners' ongoing need for information to comprehend the situation. © 2016 John Wiley & Sons Ltd.
Meysenburg, Rebecca; Albrecht, Julie A; Litchfield, Ruth; Ritter-Gooder, Paula K
2014-02-01
Food preparers in families with young children are responsible for safe food preparation and handling to prevent foodborne illness. To explore the food safety perceptions, beliefs, and practices of primary food preparers in families with children 10 years of age and younger, a mixed methods convergent parallel design and constructs of the Health Belief Model were used. A random sampling of 72 primary food handlers (36.2±8.6 years of age, 88% female) within young families in urban and rural areas of two Midwestern states completed a knowledge survey and participated in ten focus groups. Quantitative data were analyzed using SPSS. Transcribed interviews were analyzed for codes and common themes. Forty-four percent scored less than the average knowledge score of 73%. Participants believe children are susceptible to foodborne illness but perceive its severity to be low with gastrointestinal discomfort as the primary outcome. Using safe food handling practices and avoiding inconveniences were benefits of preventing foodborne illness. Childcare duties, time and knowledge were barriers to practicing food safety. Confidence in preventing foodborne illness was high, especially when personal control over food handling is present. The low knowledge scores and reported practices revealed a false sense of confidence despite parental concern to protect their child from harm. Food safety messages that emphasize the susceptibility and severity of foodborne illness in children are needed to reach this audience for adoption of safe food handling practices. Published by Elsevier Ltd.
Bergenholtz, Heidi; Jarlbaek, Lene; Hølge-Hazelton, Bibi
2016-06-01
It can be challenging to provide generalist palliative care in hospitals, owing to difficulties in integrating disease-oriented treatment with palliative care and the influences of cultural and organisational conditions. However, knowledge on the interactions that occur is sparse. To investigate the interactions between organisation and culture as conditions for integrated palliative care in hospital and, if possible, to suggest workable solutions for the provision of generalist palliative care. A convergent parallel mixed-methods design was chosen using two independent studies: a quantitative study, in which three independent datasets were triangulated to study the organisation and evaluation of generalist palliative care, and a qualitative, ethnographic study exploring the culture of generalist palliative nursing care in medical departments. A Danish regional hospital with 29 department managements and one hospital management. Two overall themes emerged: (1) 'generalist palliative care as a priority at the hospital', suggesting contrasting issues regarding prioritisation of palliative care at different organisational levels, and (2) 'knowledge and use of generalist palliative care clinical guideline', suggesting that the guideline had not reached all levels of the organisation. Contrasting issues in the hospital's provision of generalist palliative care at different organisational levels seem to hamper the interactions between organisation and culture - interactions that appear to be necessary for the provision of integrated palliative care in the hospital. The implementation of palliative care is also hindered by the main focus being on disease-oriented treatment, which is reflected at all the organisational levels. © The Author(s) 2015.
Secondary Traumatic Stress in NICU Nurses: A Mixed-Methods Study.
Beck, Cheryl Tatano; Cusson, Regina M; Gable, Robert K
2017-12-01
Secondary traumatic stress is an occupational hazard for healthcare providers who care for patients who have been traumatized. This type of stress has been reported in various specialties of nursing, but no study to date had specifically focused on neonatal intensive care unit (NICU) nurses. (1) To determine the prevalence and severity of secondary traumatic stress in NICU nurses and (2) to explore those quantitative findings in more depth through nurses' qualitative descriptions of their traumatic experiences caring for critically ill infants in the NICU. Members of NANN were sent e-mails with a link to the electronic survey. In this mixed-methods study, a convergent parallel design was used. Neonatal nurses completed the Secondary Traumatic Stress Scale (STSS) and then described their traumatic experiences caring for critically ill infants in the NICU. SPSS version 24 and content analysis were used to analyze the quantitative and qualitative data, respectively. In this sample of 175 NICU nurses, 49% of the nurses' scores on the STSS indicated moderate to severe secondary traumatic stress. Analysis of the qualitative data revealed 5 themes that described NICU nurses' traumatic experiences caring for critically ill infants. NICU nurses need to know the signs of secondary traumatic stress that they may experience caring for their critically ill infants. Avenues for dealing with the stress should be provided. Future research with a higher response rate to increase the external validity of the findings to the population of neonatal nurses is needed.
Nord-Ljungquist, Helena; Brännström, Margareta; Bohm, Katarina
2015-07-01
In the event of a cardiac arrest, emergency medical dispatchers (EMDs) play a critical role by providing telephone-assisted cardiopulmonary resuscitation (T-CPR) to laypersons. The aim of our investigation was to describe compliance with the T-CPR protocol, the performance of the laypersons in a simulated T-CPR situation, and the communication between laypersons and EMDs during these actions. We conducted a retrospective observational study by analysing 20 recorded video and audio files. In a simulation, EMDs provided laypersons with instructions following T-CPR protocols. These were then analysed using a mixed method with convergent parallel design. If the EMDs complied with the T-CPR protocol, the laypersons performed the correct procedures in 71% of the actions. The single most challenging instruction of the T-CPR protocol, for both EMDs and laypersons, was airway control. Mean values for compression depth and frequency did not reach established guideline goals for CPR. Proper application of T-CPR protocols by EMDs resulted in better performance by laypersons in CPR. The most problematic task for EMDs as well for laypersons was airway management. The study results did not establish that the quality of communication between EMDs and laypersons performing CPR in a cardiac arrest situation led to statistically different outcomes, as measured by the quality and effectiveness of the CPR delivered. Copyright © 2014 Elsevier Ltd. All rights reserved.
Baldewijns, Karolien; Bektas, Sema; Boyne, Josiane; Rohde, Carla; De Maesschalck, Lieven; De Bleser, Leentje; Brandenburg, Vincent; Knackstedt, Christian; Devillé, Aleidis; Sanders-Van Wijk, Sandra; Brunner La Rocca, Hans-Peter
2017-12-01
Heart failure is a complex disease with poor outcome. This complexity may prevent care providers from covering all aspects of care. This could not only be relevant for individual patient care, but also for care organisation. Disease management programmes applying a multidisciplinary approach are recommended to improve heart failure care. However, there is a scarcity of research considering how disease management programmes perform, in what form they should be offered, and what care and support patients and care providers would benefit from most. Therefore, the Improving kNowledge Transfer to Efficaciously Raise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF) study aims to explore the current processes of heart failure care and to identify factors that may facilitate and factors that may hamper heart failure care and guideline adherence. Within a cross-sectional mixed method design in three regions of the North-West part of Europe, patients (n = 88) and their care providers (n = 59) were interviewed. Prior to the in-depth interviews, patients were asked to complete three questionnaires: The Dutch Heart Failure Knowledge scale, The European Heart Failure Self-care Behaviour Scale and The global health status and social economic status. In parallel, retrospective data based on records from these (n = 88) and additional patients (n = 82) are reviewed. All interviews were audiotaped and transcribed verbatim for analysis.
Boyne, Josiane; Rohde, Carla; De Maesschalck, Lieven; De Bleser, Leentje; Brandenburg, Vincent; Knackstedt, Christian; Devillé, Aleidis; Sanders-Van Wijk, Sandra; Brunner La Rocca, Hans-Peter
2017-01-01
Heart failure is a complex disease with poor outcome. This complexity may prevent care providers from covering all aspects of care. This could not only be relevant for individual patient care, but also for care organisation. Disease management programmes applying a multidisciplinary approach are recommended to improve heart failure care. However, there is a scarcity of research considering how disease management programmes perform, in what form they should be offered, and what care and support patients and care providers would benefit from most. Therefore, the Improving kNowledge Transfer to Efficaciously Raise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF) study aims to explore the current processes of heart failure care and to identify factors that may facilitate and factors that may hamper heart failure care and guideline adherence. Within a cross-sectional mixed method design in three regions of the North-West part of Europe, patients (n = 88) and their care providers (n = 59) were interviewed. Prior to the in-depth interviews, patients were asked to complete three questionnaires: The Dutch Heart Failure Knowledge scale, The European Heart Failure Self-care Behaviour Scale and The global health status and social economic status. In parallel, retrospective data based on records from these (n = 88) and additional patients (n = 82) are reviewed. All interviews were audiotaped and transcribed verbatim for analysis. PMID:29472989
ERIC Educational Resources Information Center
Powell, Heather; Mihalas, Stephanie; Onwuegbuzie, Anthony J.; Suldo, Shannon; Daley, Christine E.
2008-01-01
This article illustrates the utility of mixed methods research (i.e., combining quantitative and qualitative techniques) to the field of school psychology. First, the use of mixed methods approaches in school psychology practice is discussed. Second, the mixed methods research process is described in terms of school psychology research. Third, the…
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
Reconstructing gene networks by experimentally testing the possible interactions between genes is a tedious task, and it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be greatly reduced. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to cloud computing environments to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
2014-01-01
Background Reconstructing gene networks by experimentally testing the possible interactions between genes is a tedious task, and it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to cloud computing environments to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way for inferring large networks. PMID:24428926
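A minimal island-model sketch of parallelizing a hybrid GA-PSO follows, with multiprocessing workers standing in for the paper's Hadoop MapReduce tasks. The objective function, update coefficients, and ring-migration scheme are placeholder assumptions, not the authors' gene-network model.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fitness(x):
    """Toy objective standing in for the gene-network error measure."""
    return -np.sum(x * x)               # maximize -> optimum at the origin

def evolve_island(args):
    """One generation of a hybrid GA-PSO step on one island."""
    pop, vel, seed = args
    rng = np.random.default_rng(seed)
    fit = np.array([fitness(p) for p in pop])
    best = pop[fit.argmax()]
    # PSO-style velocity update toward the island best...
    vel = 0.7 * vel + 1.4 * rng.random(pop.shape) * (best - pop)
    pop = pop + vel
    # ...followed by GA-style mutation.
    mutate = rng.random(pop.shape) < 0.05
    pop = np.where(mutate, pop + rng.normal(0.0, 0.1, pop.shape), pop)
    return pop, vel, best

if __name__ == "__main__":
    islands, pop_size, dim = 4, 20, 8
    rng = np.random.default_rng(0)
    pops = [rng.normal(0, 1, (pop_size, dim)) for _ in range(islands)]
    vels = [np.zeros((pop_size, dim)) for _ in range(islands)]
    with ProcessPoolExecutor(max_workers=islands) as ex:
        for gen in range(50):
            args = [(pops[i], vels[i], gen * islands + i)
                    for i in range(islands)]
            results = list(ex.map(evolve_island, args))
            pops = [r[0] for r in results]
            vels = [r[1] for r in results]
            # Migration: ring topology, each island adopts its
            # neighbour's best individual.
            bests = [r[2] for r in results]
            for i in range(islands):
                pops[i][0] = bests[(i - 1) % islands]
    print(max(fitness(p) for pop in pops for p in pop))
```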
Mixed Methods in CAM Research: A Systematic Review of Studies Published in 2012
Bishop, Felicity L.; Holmes, Michelle M.
2013-01-01
Background. Mixed methods research uses qualitative and quantitative methods together in a single study or a series of related studies. Objectives. To review the prevalence and quality of mixed methods studies in complementary medicine. Methods. All studies published in the top 10 integrative and complementary medicine journals in 2012 were screened. The quality of mixed methods studies was appraised using a published tool designed for mixed methods studies. Results. 4% of papers (95 out of 2349) reported mixed methods studies, 80 of which met criteria for applying the quality appraisal tool. The most popular formal mixed methods design was triangulation (used by 74% of studies), followed by embedded (14%), sequential explanatory (8%), and finally sequential exploratory (5%). Quantitative components were generally of higher quality than qualitative components; when quantitative components involved RCTs they were of particularly high quality. Common methodological limitations were identified. Most strikingly, none of the 80 mixed methods studies addressed the philosophical tensions inherent in mixing qualitative and quantitative methods. Conclusions and Implications. The quality of mixed methods research in CAM can be enhanced by addressing philosophical tensions and improving reporting of (a) analytic methods and reflexivity (in qualitative components) and (b) sampling and recruitment-related procedures (in all components). PMID:24454489
Simulating double-peak hydrographs from single storms over mixed-use watersheds
Yang Yang; Theodore A. Endreny; David J. Nowak
2015-01-01
Two-peak hydrographs after a single rain event are observed in watersheds and storms with distinct volumes contributing as fast and slow runoff. The authors developed a hydrograph model able to quantify these separate runoff volumes to help in estimation of runoff processes and residence times used by watershed managers. The model uses parallel application of two...
Active Learning of Geometrical Optics in High School: The ALOP Approach
ERIC Educational Resources Information Center
Alborch, Alejandra; Pandiella, Susana; Benegas, Julio
2017-01-01
A group comparison experiment of two high school classes with pre and post instruction testing has been carried out to study the suitability and advantages of using the active learning of optics and photonics (ALOP) curricula in high schools of developing countries. Two parallel, mixed gender, 12th grade classes of a high school run by the local…
Mixed Carrier Conduction in Modulation-doped Field Effect Transistors
NASA Technical Reports Server (NTRS)
Schacham, S. E.; Haugland, E. J.; Mena, R. A.; Alterovitz, S. A.
1995-01-01
The contribution of more than one carrier to the conductivity in modulation-doped field effect transistors (MODFET) affects the resultant mobility and complicates the characterization of these devices. Mixed conduction arises from the population of several subbands in the two-dimensional electron gas (2DEG), as well as the presence of a parallel path outside the 2DEG. We characterized GaAs/AlGaAs MODFET structures with both delta and continuous doping in the barrier. Based on simultaneous Hall and conductivity analysis we conclude that the parallel conduction is taking place in the AlGaAs barrier, as indicated by the carrier freezeout and activation energy. Thus, simple Hall analysis of these structures may lead to erroneous conclusions, particularly for real-life device structures. The distribution of the 2D electrons between the various confined subbands depends on the doping profile. While for a continuously doped barrier the Shubnikov-de Haas analysis shows a superposition of two frequencies for concentrations below 10^12 cm^-2, for a delta doped structure the superposition is absent even at 50% larger concentrations. This result is confirmed by self-consistent analysis, which indicates that the concentration of the second subband hardly increases.
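The textbook two-carrier expressions that underlie this kind of mixed-conduction Hall analysis (low magnetic field, both channels n-type) are:

```latex
\sigma = e\left(n_1\mu_1 + n_2\mu_2\right), \qquad
|R_H| = \frac{n_1\mu_1^2 + n_2\mu_2^2}{e\left(n_1\mu_1 + n_2\mu_2\right)^2}, \qquad
\mu_H = |R_H|\,\sigma = \frac{n_1\mu_1^2 + n_2\mu_2^2}{n_1\mu_1 + n_2\mu_2}
```

The measured Hall mobility is therefore an average weighted toward the higher-mobility channel, which is why a parallel low-mobility path in the AlGaAs barrier distorts a simple single-carrier analysis.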
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zeng, Yuting; Chen, Ming; Shen, Yunlong
2018-03-01
We have proposed a scheme for a radio-over-fiber (RoF) system employing a dual-parallel Mach-Zehnder modulator (DP-MZM) based on four-wave mixing (FWM) in a semiconductor optical amplifier (SOA). In this scheme, the pump and the signal are generated by properly adjusting the direct current bias and modulation index of the DP-MZM and the phase difference between the sub-MZMs. Because the pump and the signal derive from the same optical wave, the polarization states of the two lightwaves are copolarized, and the single-pump FWM is polarization insensitive. After FWM and optical filtering, an optical millimeter-wave with octuple frequency is generated. A 40-GHz RoF system with a 2.5-Gbit/s signal is implemented in numerical simulation; the results show good performance after the signal is transmitted over 40 km of single-mode fiber. The effects of the SOA's injection current and the carrier-to-sideband ratio on system performance are then discussed by simulation, and the optimum values for the system are obtained.
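One arrangement consistent with the described octupling (the exact sideband orders are not stated in the abstract, so this is an assumption): if the DP-MZM delivers a pump and a signal separated by four times the RF drive frequency, degenerate FWM in the SOA creates an idler at

```latex
f_{\mathrm{i}} = 2 f_{\mathrm{p}} - f_{\mathrm{s}}, \qquad
|f_{\mathrm{p}} - f_{\mathrm{s}}| = 4 f_{\mathrm{RF}}
\;\Longrightarrow\;
|f_{\mathrm{i}} - f_{\mathrm{s}}| = 2\,|f_{\mathrm{p}} - f_{\mathrm{s}}| = 8 f_{\mathrm{RF}}
```

so filtering out the idler-signal pair and photodetecting their beat yields a millimeter wave at eight times the RF drive.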
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory
Ali, Usama S.; van Rijn, Peter W.
2015-01-01
Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Vice versa, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As the focus is sometimes on either the TIF or the TCC alone, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target with manipulation of their relative importance, these differences can be made to disappear.
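A minimal sketch of TIF-target assembly as a mixed integer linear program follows, using a toy 2PL item bank and the PuLP modeling library. These are assumptions: the paper's solver, item bank, theta grid, and content constraints are not specified in the abstract.

```python
import numpy as np
import pulp

# Toy item bank under the 2PL model: a = discrimination, b = difficulty.
rng = np.random.default_rng(0)
n_items, test_len = 60, 10
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
thetas = np.array([-1.0, 0.0, 1.0])      # ability points for the TIF target

def info(ai, bi, theta):
    """2PL item information at ability theta."""
    p = 1.0 / (1.0 + np.exp(-ai * (theta - bi)))
    return ai * ai * p * (1.0 - p)

I = np.array([[info(a[i], b[i], t) for t in thetas] for i in range(n_items)])
target = I.mean(axis=0) * test_len        # an arbitrary, attainable TIF target

prob = pulp.LpProblem("tif_assembly", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]
dev = [pulp.LpVariable(f"d{k}", lowBound=0) for k in range(len(thetas))]

prob += pulp.lpSum(dev)                   # minimize total TIF deviation
prob += pulp.lpSum(x) == test_len         # fixed test length
for k in range(len(thetas)):              # |TIF(theta_k) - target_k| <= dev_k
    tif_k = pulp.lpSum(I[i, k] * x[i] for i in range(n_items))
    prob += tif_k - target[k] <= dev[k]
    prob += target[k] - tif_k <= dev[k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in range(n_items) if x[i].value() > 0.5])
```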
A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Lim, Chieng-Fai
1991-01-01
The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, E.F.; Vrabec, J.
1981-10-26
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, Edward F.; Vrabec, John
1984-01-01
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
Jiang, Junfeng; Liu, Tiegen; Zhang, Yimo; Liu, Lina; Zha, Ying; Zhang, Fan; Wang, Yunxin; Long, Pin
2006-01-20
A parallel demodulation system for extrinsic Fabry-Perot interferometer (EFPI) and fiber Bragg grating (FBG) sensors is presented, which is based on a Michelson interferometer and combines the methods of low-coherence interference and a Fourier-transform spectrum. The parallel demodulation theory is modeled with Fourier-transform spectrum technology, and a signal separation method with an EFPI and FBG is proposed. The design of an optical path difference scanning and sampling method without a reference light is described. Experiments show that the parallel demodulation system has good spectrum demodulation and low-coherence interference demodulation performance. It can realize simultaneous strain and temperature measurements while keeping the whole system configuration less complex.
Dynamic modeling of parallel robots for computed-torque control implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codourey, A.
1998-12-01
In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
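For context, the standard computed-torque law that such a model feeds is (hats denote model estimates; the gains K_p and K_v are design choices):

```latex
\tau \;=\; \hat{M}(q)\left(\ddot{q}_d + K_v\,\dot{e} + K_p\,e\right)
\;+\; \hat{C}(q,\dot{q})\,\dot{q} \;+\; \hat{G}(q),
\qquad e = q_d - q
```

This is why a mass matrix that can be computed separately, as the abstract notes, suffices for the decoupling control strategy even though it does not appear explicitly in the virtual-work formulation.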
Parallel heuristics for scalable community detection
Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth
2015-08-14
Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in comparable number or fewer iterations, while providing real speedups of up to 16x using 32 threads.
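The modularity objective that the Louvain heuristic optimizes is the standard one (A the adjacency matrix, k_i the node degrees, m the edge count, c_i the community assignments):

```latex
Q \;=\; \frac{1}{2m}\sum_{i,j}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j)
```

Each local-move phase greedily relocates vertices to the neighbouring community with the largest modularity gain, and it is the sequential dependence between such moves that the paper's heuristics relax to obtain parallelism.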
Brown, K M; Elliott, S J; Leatherdale, S T; Robertson-Wilson, J
2015-12-01
The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing rigour in quantitative and qualitative research, there is poor consensus regarding rigour in mixed methods. Using the empirical example of school-based obesity interventions, this methodological review examined how mixed methods have been used and reported, and how rigour has been addressed. Twenty-three peer-reviewed mixed methods studies were identified through a systematic search of five databases and appraised using the guidelines for Good Reporting of a Mixed Methods Study. In general, more detailed description of data collection and analysis, integration, inferences and justifying the use of mixed methods is needed. Additionally, improved reporting of methodological rigour is required. This review calls for increased discussion of practical techniques for establishing rigour in mixed methods research, beyond those for quantitative and qualitative criteria individually. A guide for reporting mixed methods research in population health should be developed to improve the reporting quality of mixed methods studies. Through improved reporting, mixed methods can provide strong evidence to inform policy and practice. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Arnault, Denise Saint; Fetters, Michael D.
2013-01-01
Mixed methods research has made significant inroads in the effort to examine complex health-related phenomena. However, little has been published on the funding of mixed methods research projects. This paper addresses that gap by presenting an example of an NIMH-funded project using a mixed methods QUAL-QUAN triangulation design entitled "The Mixed-Method Analysis of Japanese Depression." We present the Cultural Determinants of Health Seeking model that framed the study, the specific aims, the quantitative and qualitative data sources informing the study, and an overview of the mixing of the two studies. Finally, we examine reviewers' comments and our insights related to writing a mixed methods proposal that succeeds in achieving R01-level funding. PMID:25419196
Twelve tips for getting started using mixed methods in medical education research.
Lavelle, Ellen; Vuk, Jasna; Barber, Carolyn
2013-04-01
Mixed methods research, which is gaining popularity in medical education, provides a new and comprehensive approach for addressing teaching, learning, and evaluation issues in the field. The aim of this article is to provide medical education researchers with 12 tips, based on consideration of current literature in the health professions and in educational research, for conducting and disseminating mixed methods research. Engaging in mixed methods research requires consideration of several major components: the mixed methods paradigm, types of problems, mixed method designs, collaboration, and developing or extending theory. Mixed methods is an ideal tool for addressing a full range of problems in medical education to include development of theory and improving practice.
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
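As an illustration of the first of these, CG SENSE amounts to running conjugate gradients on the normal equations E^H E x = E^H y, with E the coil-weighted (non-Cartesian) Fourier encoding operator. A minimal sketch, with the encoding left abstract (normal_op is a hypothetical callback, e.g. a NUFFT forward/adjoint pair combined with coil sensitivities):

```python
import numpy as np

def cg_sense(EH_y, normal_op, n_iter=20, tol=1e-6):
    """Solve (E^H E) x = E^H y by conjugate gradients.
    EH_y: adjoint-encoded data E^H y; normal_op(x): applies E^H E."""
    x = np.zeros_like(EH_y)
    r = EH_y.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal_op(p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```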
NASA Astrophysics Data System (ADS)
Shariati, Maryam; Yortsos, Yannis; Talon, Laurent; Martin, Jerome; Rakotomalala, Nicole; Salin, Dominique
2003-11-01
We consider miscible displacement between parallel plates, where the viscosity is a function of the concentration. By selecting a piece-wise representation, the problem can be considered as "three-phase" flow. Assuming a lubrication-type approximation, the mathematical description is in terms of two quasi-linear hyperbolic equations. When the mobility of the middle phase is smaller than that of its neighbors, the system is genuinely hyperbolic and can be solved analytically. However, when it is larger, an elliptic region develops. This change-of-type behavior is proved here for the first time based on sound physical principles. Numerical solutions with a small diffusion are presented. Good agreement is obtained outside the elliptic region, but not inside, where the numerical results show unstable behavior. We conjecture that for the solution of the real problem in the mixed-type case, the full higher-dimensionality problem must be considered inside the elliptic region, in which the lubrication (parallel-flow) approximation is no longer appropriate. This is discussed in a companion presentation.
The Software Correlator of the Chinese VLBI Network
NASA Technical Reports Server (NTRS)
Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli
2010-01-01
The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in the CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology and runs on SMP (Symmetric Multiple Processor) servers. Both correlators feature a flexible structure and scalability.
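The core computation of such an FX correlator can be sketched in a few lines: channelize each station's voltage stream with an FFT ("F"), then cross-multiply and accumulate ("X"). A toy two-station Python version, omitting the delay/fringe corrections and the Pthreads/MPI distribution described above:

```python
import numpy as np

def fx_correlate(x1, x2, nchan=256):
    """Time-averaged cross-power spectrum of two station streams."""
    nfrm = min(len(x1), len(x2)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for k in range(nfrm):
        s1 = np.fft.fft(x1[k * nchan:(k + 1) * nchan])
        s2 = np.fft.fft(x2[k * nchan:(k + 1) * nchan])
        acc += s1 * np.conj(s2)  # cross-multiply, accumulate
    return acc / nfrm
```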
Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber
NASA Technical Reports Server (NTRS)
Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.
2006-01-01
Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools, based mainly on one-dimensional analysis and empirical models, cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is part of a program to improve analytical methods and methodologies to analyze the reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling complements parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been reported previously. Here we limit our discussion to the work discussed in Refs. 2, 3, and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with emphasis on the level of energy needed for ignition and the ensuing flame propagation issues. Our focus in the present paper is on identifying the unsteady mixing processes that provide the propellant mixture in which the ignition source is to be placed. In particular, we wish to characterize the spatial and temporal mixture distribution with a view toward identifying preferred spatial and temporal locations for the ignition source. As such, the present work is limited to cold flow (pre-ignition) conditions.
Optimal parallel solution of sparse triangular systems
NASA Technical Reports Server (NTRS)
Alvarado, Fernando L.; Schreiber, Robert
1990-01-01
A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to a parallel forward solve or backsolve. Applications are to iterative solvers with triangular preconditioners, to structural analysis, or to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The parallelism attainable is illustrated by means of elimination trees and clique trees.
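A minimal sketch of the underlying idea (not the paper's optimal partition): group the unknowns by elimination level, so each level's factor is I + N_j with N_j squaring to zero and hence inverting to I - N_j with no fill; solving for many right-hand sides then costs one sparse-matrix-times-dense-matrix product per level. A unit diagonal is assumed (scale by the diagonal first otherwise):

```python
import numpy as np
import scipy.sparse as sp

def partitioned_forward_solve(L, B):
    """Solve L X = B for unit lower-triangular sparse L and many
    right-hand sides B (n x k) via no-fill inverse factors I - N_j."""
    Lr = sp.csr_matrix(L)
    n = Lr.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        js = Lr.indices[Lr.indptr[i]:Lr.indptr[i + 1]]
        js = js[js < i]                     # dependencies of unknown i
        if js.size:
            level[i] = level[js].max() + 1
    N = sp.tril(Lr, -1).tocsc()             # strictly lower part
    X = np.array(B, dtype=float)
    for lev in range(level.max() + 1):
        cols = np.flatnonzero(level == lev)
        X -= N[:, cols] @ X[cols]           # apply (I - N_j): one product
    return X
```

The number of sequential steps drops from n (ordinary forward substitution) to the number of levels, which is the quantity the paper's permutation then minimizes.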
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four-processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance, indicating that these parallel-vector algorithms can be used in a new generation of finite-element-based structural design/analysis-synthesis codes.
Zhang, Hong; Zapol, Peter; Dixon, David A.; ...
2015-11-17
The Shift-and-invert parallel spectral transformations (SIPs), a computational approach to solve sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2000) atom nanotube for which ~330,000 (~5600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 266,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed and the performance of SIPs is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong scaling limit. As a result, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.
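The heart of each SIPs spectral slice is a shift-and-invert eigensolve; SciPy exposes the same operation for a single shift, which may help make the idea concrete (a toy chain Hamiltonian stands in for the DFTB matrices here):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000  # matches the smallest example above
H = sp.diags([np.full(n - 1, -1.0), np.zeros(n), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")   # toy Hamiltonian
S = sp.identity(n, format="csc")                 # toy overlap matrix

# Shift-and-invert about sigma: (H - sigma*S) is factorized once, and
# Lanczos runs on its inverse, so eigenvalues nearest sigma converge fast.
sigma = -1.5
vals, vecs = eigsh(H, k=10, M=S, sigma=sigma, which="LM")
```

SIPs distributes many such shifts (spectral slices) across MPI ranks, which is where the parallel scalability reported above comes from.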
Grider, Gary A.; Poole, Stephen W.
2015-09-01
Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
A framework for grand scale parallelization of the combined finite discrete element method in 2d
NASA Astrophysics Data System (ADS)
Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.
2014-09-01
Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single-processor realm. In this work a hardware-independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.
Mixed-methods research in pharmacy practice: recommendations for quality reporting. Part 2.
Hadi, Muhammad Abdul; Alldred, David Phillip; Closs, S José; Briggs, Michelle
2014-02-01
This is the second of two papers that explore the use of mixed-methods research in pharmacy practice. This paper discusses the rationale, applications, limitations and challenges of conducting mixed-methods research. As with other research methods, the choice of mixed-methods should always be justified because not all research questions require a mixed-methods approach. Mixed-methods research is particularly suitable when one dataset may be inadequate in answering the research question, an explanation of initial results is required, generalizability of qualitative findings is desired or broader and deeper understanding of a research problem is necessary. Mixed-methods research has its own challenges and limitations, which should be considered carefully while designing the study. There is a need to improve the quality of reporting of mixed-methods research. A framework for reporting mixed-methods research is proposed, for researchers and reviewers, with the intention of improving its quality. Pharmacy practice research can benefit from research that uses both 'numbers' (quantitative) and 'words' (qualitative) to develop a strong evidence base to support pharmacy-led services. © 2013 Royal Pharmaceutical Society.
NASA Astrophysics Data System (ADS)
Tsai, Cheng-Han; Wu, Xuanye; Kuan, Da-Han; Zimmermann, Stefan; Zengerle, Roland; Koltay, Peter
2018-08-01
In order to culture and analyze individual living cells, microfluidic cultivation and manipulation of cells have become an increasingly important topic. Such microfluidic systems allow for exploring the phenotypic differences between thousands of genetically identical cells or pharmacological tests in parallel, which is impossible to achieve by traditional macroscopic cell culture methods. Therefore, plenty of microfluidic systems and devices have been developed for cell biological studies like cell culture, cell sorting, and cell lysis in the past. However, these microfluidic systems are still limited by external pressure sources, which are typically large in size and have to be connected by fluidic tubing, leading to complex and delicate systems. In order to provide a miniaturized, more robust actuation system, a novel, compact, low-power digital hydraulic drive (DHD) has been developed that is intended for use in portable and automated microfluidic systems for various applications. The DHD considered in this work consists of a shape memory alloy (SMA) actuator and a pneumatic cylinder. The switching time of the digital modes (pressure ON versus OFF) can be adjusted from 1 s to minutes. Thus, DHDs might have many applications for driving microfluidic devices. In this work, different implementations of DHDs are presented and their performance is characterized by experiments. In particular, it will be shown that DHDs can be used for microfluidic large-scale integration (mLSI) valve control (256 valves in parallel) as well as potentially for droplet-based microfluidic systems. As a further application example, high-throughput mixing of cell cultures (96 wells in parallel) is demonstrated, employing the DHD to drive a so-called ‘functional lid’ (FL) to enable a miniaturized micro-bioreactor in a regular 96-well micro well plate.
Nocerino, Elisabetta; Mason, Peter J.; Schwahn, Denise J.; Hetzel, Scott; Turnquist, Alyssa M.; Lee, Fred T.; Brace, Christopher L.
2017-01-01
Purpose To determine how close to the heart pulmonary microwave ablation can be performed without causing cardiac tissue injury or significant arrhythmia. Materials and Methods The study was performed with approval from the institutional animal care and use committee. Computed tomographic fluoroscopically guided microwave ablation of the lung was performed in 12 swine. Antennas were randomized to either parallel (180° ± 20°) or perpendicular (90° ± 20°) orientation relative to the heart surface and to distances of 0–10 mm from the heart. Ablations were performed at 65 W for 5 minutes or until a significant arrhythmia (asystole, heart block, bradycardia, supraventricular or ventricular tachycardia) developed. Heart tissue was evaluated with vital staining and histologic examination. Data were analyzed with mixed effects logistic regression, receiver operating characteristic curves, and the Fisher exact test. Results Thirty-four pulmonary microwave ablations were performed with the antenna a median distance of 4 mm from the heart in both perpendicular (n = 17) and parallel (n = 17) orientation. Significant arrhythmias developed during six (18%) ablations. Cardiac tissue injury occurred with 17 ablations (50%). Risk of arrhythmia and tissue injury decreased with increasing antenna distance from the heart with both antenna orientations. No cardiac complication occurred with a distance of greater than or equal to 4.4 mm from the heart. The ablation zone extended to the pleural surface adjacent to the heart in 71% of parallel and 17% of perpendicular ablations performed 5–10 mm from the heart. Conclusion Microwave lung ablations performed more than or equal to 5 mm from the heart were associated with a low risk of cardiac complications. © RSNA, 2016 PMID:27732159
Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4
2017-12-20
Excited states and optical properties of nanomaterials are predicted by first-principles methods in the software package ACES using large parallel computers, growing to the exascale. Subject terms: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing.
Using mixed methods research in medical education: basic guidelines for researchers.
Schifferdecker, Karen E; Reed, Virginia A
2009-07-01
Mixed methods research involves the collection, analysis and integration of both qualitative and quantitative data in a single study. The benefits of a mixed methods approach are particularly evident when studying new questions or complex initiatives and interactions, which is often the case in medical education research. Basic guidelines for when to use mixed methods research and how to design a mixed methods study in medical education research are not readily available. The purpose of this paper is to remedy that situation by providing an overview of mixed methods research, research design models relevant for medical education research, examples of each research design model in medical education research, and basic guidelines for medical education researchers interested in mixed methods research. Mixed methods may prove superior in increasing the integrity and applicability of findings when studying new or complex initiatives and interactions in medical education research. They deserve an increased presence and recognition in medical education research.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
NASA Astrophysics Data System (ADS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
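A minimal sketch of the Chebyshev alternative (the standard recurrence, not the authors' multilevel-specific polynomial): given an upper eigenvalue estimate lam_max, the smoother damps error components in [lam_max/alpha, lam_max] using only matrix-vector products, which is why it parallelizes where Gauss-Seidel does not:

```python
import numpy as np

def chebyshev_smooth(A, b, x, lam_max, degree=4, alpha=30.0):
    """Chebyshev polynomial smoother for SPD A (mat-vecs only).
    lam_max is typically estimated with a few power/Lanczos iterations."""
    lam_min = lam_max / alpha
    theta = 0.5 * (lam_max + lam_min)   # interval center
    delta = 0.5 * (lam_max - lam_min)   # interval half-width
    sigma = theta / delta
    rho = 1.0 / sigma
    d = (b - A @ x) / theta
    x = x + d
    for _ in range(degree - 1):
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * (b - A @ x)
        x = x + d
        rho = rho_new
    return x
```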
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, T.
Progress in parallel/vector computers has driven us to develop suitable numerical integrators that utilize their computational power to the full extent while being independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be relatively inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) leads to an acceleration factor on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to highlight Milankar's so-called "pipelined predictor corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.
EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure as well as the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.
NASA Technical Reports Server (NTRS)
Wilder, F. D.; Ergun, R. E.; Schwartz, S. J.; Newman, D. L.; Eriksson, S.; Stawarz, J. E.; Goldman, M. V.; Goodrich, K. A.; Gershman, D. J.; Malaspina, D.;
2016-01-01
On 8 September 2015, the four Magnetospheric Multiscale spacecraft encountered a Kelvin-Helmholtz unstable magnetopause near the dusk flank. The spacecraft observed periodic compressed current sheets, between which the plasma was turbulent. We present observations of large-amplitude (up to 100 mV/m) oscillations in the electric field. Because these oscillations are purely parallel to the background magnetic field, electrostatic, and below the ion plasma frequency, they are likely to be ion acoustic-like waves. These waves are observed in a turbulent plasma where multiple particle populations are intermittently mixed, including cold electrons with energies less than 10 eV.
O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon
2007-01-01
Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325
Bartholomew, Theodore T; Lockard, Allison J
2018-06-13
Mixed methods can foster depth and breadth in psychological research. However, its use remains in development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and how each used mixed methods. Key features of mixed methods designs and these common patterns were identified: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.
Mixed methods research in music therapy research.
Bradt, Joke; Burns, Debra S; Creswell, John W
2013-01-01
Music therapists have an ethical and professional responsibility to provide the highest quality care possible to their patients. Much of the time, high quality care is guided by evidence-based practice standards that integrate the most current, available research in making decisions. Accordingly, music therapists need research that integrates multiple ways of knowing and forms of evidence. Mixed methods research holds great promise for facilitating such integration. At this time, there have not been any methodological articles published on mixed methods research in music therapy. The purpose of this article is to introduce mixed methods research as an approach to address research questions relevant to music therapy practice. This article describes the core characteristics of mixed methods research, considers paradigmatic issues related to this research approach, articulates major challenges in conducting mixed methods research, illustrates four basic designs, and provides criteria for evaluating the quality of mixed methods articles using examples of mixed methods research from the music therapy literature. Mixed methods research offers unique opportunities for strengthening the evidence base in music therapy. Recommendations are provided to ensure rigorous implementation of this research approach.
A Mixed Methods Content Analysis of the Research Literature in Science Education
NASA Astrophysics Data System (ADS)
Schram, Asta B.
2014-10-01
In recent years, more and more researchers in science education have been turning to the practice of combining qualitative and quantitative methods in the same study. This approach of using mixed methods creates possibilities to study the various issues that science educators encounter in more depth. In this content analysis, I evaluated 18 studies from science education journals as they relate to the definition, design, and overall practice of using mixed methods. I scrutinized a purposeful sample, derived from 3 journals (the International Journal of Science Education, the Journal of Research in Science Teaching, and the Research in Science Education) in terms of the type of data collected, timing, priority, design, the mixing of the 2 data strands in the studies, and the justifications authors provide for using mixed methods. Furthermore, the articles were evaluated in terms of how well they met contemporary definitions for mixed methods research. The studies varied considerably in the use and understanding of mixed methods. A systematic evaluation of the employment of mixed methods methodology was used to identify the studies that best reflected contemporary definitions. A comparison to earlier content analyses of mixed methods research indicates that researchers' knowledge of mixed methods methodology may be increasing. The use of this strategy in science education research calls, however, for an improved methodology, especially concerning the practice of mixing. Suggestions are given on how to best use this approach.
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, the conditions the multigrid method must satisfy to serve as a valid PCG preconditioner are considered. Numerical experiments then show the behavior of the MGCG method and that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood in terms of the eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is realized that the MGCG method converges in very few iterations and the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
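A minimal sketch of the MGCG iteration, with the multigrid preconditioner left abstract (v_cycle is a hypothetical callback returning an approximation to A^{-1} r; it must act as a symmetric positive definite operator for CG theory to hold):

```python
import numpy as np

def mgcg(A, b, v_cycle, n_iter=50, tol=1e-8):
    """Conjugate gradients preconditioned by one V-cycle per iteration."""
    x = np.zeros_like(b)
    r = b.copy()
    z = v_cycle(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = v_cycle(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```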
AZTEC: A parallel iterative package for solving linear systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1996-12-31
We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.
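AZTEC itself is a parallel C library, but the same Krylov-plus-preconditioner pattern can be sketched with SciPy's serial analogues, e.g. GMRES with an incomplete-LU preconditioner (a toy tridiagonal system stands in for a real PDE discretization):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 100
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")  # toy nonsymmetric system
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)             # incomplete LU factorization
M = LinearOperator(A.shape, matvec=ilu.solve)  # preconditioner M ~ A^{-1}

x, info = gmres(A, b, M=M)                # info == 0 indicates convergence
```

As with AZTEC's support for Newton schemes, the factorization (ilu) can be reused across successive solves with the same matrix.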
Transformative, Mixed Methods Checklist for Psychological Research with Mexican Americans
ERIC Educational Resources Information Center
Canales, Genevieve
2013-01-01
This is a description of the creation of a research methods tool, the "Transformative, Mixed Methods Checklist for Psychological Research With Mexican Americans." For conducting literature reviews of and planning mixed methods studies with Mexican Americans, it contains evaluative criteria calling for transformative mixed methods, perspectives…
A nonrecursive order N preconditioned conjugate gradient: Range space formulation of MDOF dynamics
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.
1990-01-01
While excellent progress has been made in deriving algorithms that are efficient for certain combinations of system topologies and concurrent multiprocessing hardware, several issues must be resolved to incorporate transient simulation in the control design process for large space structures. Specifically, strategies must be developed that are applicable to systems with numerous degrees of freedom. In addition, the algorithms must have growth potential in that they must also be amenable to implementation on forthcoming parallel system architectures. For mechanical system simulation, this implies that algorithms are required that induce parallelism on a fine scale, suitable for the emerging class of highly parallel processors, and that transient simulation methods must be automatically load balancing for a wider collection of system topologies and hardware configurations. These problems are addressed by employing a combined range space/preconditioned conjugate gradient formulation of multi-degree-of-freedom dynamics. The method described has several advantages. In a sequential computing environment: by employing a regular ordering of the system connectivity graph, an extremely efficient preconditioner can be derived from the 'range space metric', as opposed to the system coefficient matrix; because of the effectiveness of the preconditioner, preliminary studies indicate that the method can achieve performance rates that depend linearly upon the number of substructures, hence the title 'Order N'; and the method is non-assembling. Furthermore, the approach is promising as a potential parallel processing algorithm in that it exhibits a fine parallel granularity suitable for a wide collection of combinations of physical system topologies and computer architectures, and it is easily load balanced among processors without relying upon system topology to induce parallelism.
The Goddard Space Flight Center Program to develop parallel image processing systems
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1972-01-01
Parallel image processing, defined as image processing in which all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered as parallel image processing techniques.
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) for time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
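The structure that makes kriging amenable to all three models is that each grid node solves a small, independent linear system. A minimal ordinary-kriging sketch with process-level parallelism over grid chunks follows; the exponential variogram and its parameters are assumptions for illustration (in practice the variogram is fitted to the data):

```python
import numpy as np
from functools import partial
from multiprocessing import Pool

def gamma_exp(h, sill=1.0, rng=10.0):
    """Exponential semivariogram model (assumed)."""
    return sill * (1.0 - np.exp(-h / rng))

def krige_points(targets, xy, z):
    """Ordinary kriging of samples (xy, z) onto target points."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = gamma_exp(d)
    K[n, n] = 0.0                      # unbiasedness constraint block
    out = np.empty(len(targets))
    for i, t in enumerate(targets):
        g = np.append(gamma_exp(np.linalg.norm(xy - t, axis=1)), 1.0)
        w = np.linalg.solve(K, g)      # weights + Lagrange multiplier
        out[i] = w[:n] @ z
    return out

if __name__ == "__main__":
    rs = np.random.default_rng(0)
    xy = rs.uniform(0, 100, (200, 2))  # sample locations
    z = rs.normal(size=200)            # sample elevations
    grid = np.array([[x, y] for x in range(100) for y in range(100)], float)
    with Pool(4) as pool:              # grid nodes are independent
        parts = pool.map(partial(krige_points, xy=xy, z=z),
                         np.array_split(grid, 16))
    dem = np.concatenate(parts)
```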
Murray, Linda; Anggrahini, Simplicia Maria; Woda, Rahel Rara; Ayton, Jennifer E; Beggs, Sean
2016-08-01
The eastern Indonesian province of Nusa Tenggara Timur (NTT) has an infant mortality rate of 45 per 1000, higher than the national average (28/1000). Exclusive breastfeeding, important for improving newborn and infant survival, is encouraged among hospitalized infants in Kupang, the provincial capital of NTT. However, barriers to hospitalized infants receiving breast milk may exist. This study explored the barriers and enablers to exclusive breastfeeding among sick and low birth weight hospitalized infants in Kupang, NTT. The attitudes and cultural beliefs of health workers and mothers regarding the use of donor breast milk (DBM) were also explored. A mixed-methods study using a convergent parallel design was conducted. A convenience sample of 74 mothers of hospitalized infants and 8 hospital staff participated in semi-structured interviews. Facility observational data were also collected. Analysis was conducted using Davis's barrier analysis method. Of the 73 questionnaires analyzed, we found that 39.7% of mothers retrospectively reported exclusively breastfeeding and 37% of mothers expressed breast milk. Expressing was associated with maternal-reported exclusive breastfeeding, χ²(1, N = 73) = 6.82, P = .009. Staff supported breastfeeding for sick infants, yet mothers could only access infants during set nursery visiting hours. No mothers used DBM, and most mothers and staff found the concept distasteful. Increasing mothers' opportunities for contact with infants is the first step to increasing exclusive breastfeeding rates among hospitalized infants in Kupang. This will facilitate mothers to express their breast milk, improve the acceptability of DBM, and enhance the feasibility of establishing a DBM bank. © The Author(s) 2016.
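The association reported above is a 2x2 chi-square test (expressing by exclusive breastfeeding). The cell counts below are hypothetical but chosen to be consistent with the reported marginals (27/73 expressing, 29/73 exclusive) and to reproduce the reported statistic:

```python
from scipy.stats import chi2_contingency

# Rows: expressed milk (yes/no); columns: exclusive breastfeeding (yes/no)
table = [[16, 11],
         [13, 33]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}, N = 73) = {chi2:.2f}, P = {p:.3f}")
# -> chi2(1, N = 73) = 6.83, P = 0.009
```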
2012-01-01
Background Patients with HIV/AIDS on Antiretroviral Therapy (ART) suffer from physical, psychological and spiritual problems. Despite international policy explicitly stating that a multidimensional approach such as palliative care should be delivered throughout the disease trajectory and alongside treatment, the effectiveness of this approach has not been tested in ART-experienced populations. Methods/design This mixed methods study uses a Randomised Controlled Trial (RCT) to test the null hypothesis that receipt of palliative care in addition to standard HIV care does not affect pain compared to standard care alone. An additional qualitative component will explore the mechanism of action and participant experience. The sample size is designed to detect a statistically significant decrease in reported pain, determined by a two-tailed test and a p value of ≤0.05. Recruited patients will be adults on ART for more than one month who report significant pain or symptoms which have lasted for more than two weeks (as measured by the African Palliative Care Association (APCA) African Palliative Outcome Scale (POS)). The intervention under trial is palliative care delivered by an existing HIV facility nurse trained to a set standard. Following an initial pilot, the study will be delivered in two African countries, using two parallel independent Phase III clinical RCTs. Qualitative data will be collected from semi-structured interviews and documentation from clinical encounters, to explore the experience of receiving palliative care in this context. Discussion The data provided by this study will provide evidence to inform the improvement of outcomes for people living with HIV and on ART in Africa. ClinicalTrials.gov Identifier: NCT01608802 PMID:23130740
Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo
2014-01-01
We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
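As a rough sketch of the additive half of this machinery (not the paper's CE/QM algorithms), the snippet below builds a VanRaden-style genomic relationship matrix from 0/1/2 SNP codes and computes additive GBLUP for fully genotyped, phenotyped individuals; the heritability used to set lambda would come from GREML in practice:

```python
import numpy as np

def gblup_additive(M012, y, h2=0.5):
    """Additive GBLUP sketch. M012: n x m SNP matrix coded 0/1/2; y:
    phenotypes. G = ZZ'/(2*sum p(1-p)); breeding values are predicted as
    g_hat = G (G + lam I)^{-1} (y - mean(y)), lam = sigma_e^2/sigma_g^2."""
    p = M012.mean(axis=0) / 2.0                   # allele frequencies
    Z = M012 - 2.0 * p                            # centered genotypes
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))   # genomic relationships
    lam = (1.0 - h2) / h2
    g_hat = G @ np.linalg.solve(G + lam * np.eye(len(y)), y - y.mean())
    return g_hat
```

A dominance term would add a parallel D matrix built from a heterozygote coding, as in the model described above.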
Newcomer immigrant adolescents: A mixed-methods examination of family stressors and school outcomes.
Patel, Sita G; Clarke, Annette V; Eltareb, Fazia; Macciomei, Erynn E; Wickham, Robert E
2016-06-01
Family stressors predict negative psychological outcomes for immigrant adolescents, yet little is known about how such stressors interact to predict school outcomes. The purpose of this study was to explore the interactive role of family stressors on school outcomes for newcomer adolescent immigrants. Using a convergent parallel mixed-methods design, we used quantitative methods to explore interactions between family separation, acculturative family conflict, and family life events to predict 2 school outcomes, academic achievement (via grade point average [GPA]), and externalizing problems (student- and teacher-reported). The sample included 189 newcomer immigrant public high school students from 34 countries of origin. Quantitative measures included the Multicultural Events Scale for Adolescents, Family Conflicts Scale, and the Achenbach System of Empirically Based Assessment (ASEBA). Qualitative data were collected through a semi-structured interview. Quantitative results found that more family life events were associated with lower GPA, but this association was weaker for participants who had been separated from their parents. More family conflict was associated with more externalizing symptoms (both youth- and teacher-reported). However, the association between family conflict and teacher-reported externalizing symptoms was found only among participants reporting a greater than average number of life events. Qualitative results show that separation from extended family networks was among the most stressful of experiences, and demonstrate the highly complex nature of each family stressor domain. At a time when immigration is rapidly changing our school system, a better understanding of early risk factors for new immigrants can help teachers, administrators, and mental health practitioners to identify students with greatest need to foster behavioral, academic, and emotional well-being. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mitchell, Lauren L; Peterson, Colleen M; Rud, Shaina R; Jutkowitz, Eric; Sarkinen, Andrielle; Trost, Sierra; Porta, Carolyn M; Finlay, Jessica M; Gaugler, Joseph E
2018-03-01
Technologies have emerged that aim to help older persons with Alzheimer's disease and related dementias (ADRDs) remain at home while also supporting their caregiving family members. However, the usefulness of these innovations, particularly in home-based care contexts, remains underexplored. The current study evaluated the acceptability and utility of an in-home remote activity monitoring (RAM) system for 30 family caregivers of persons with ADRD via quantitative survey data collected over a 6-month period and qualitative survey and interview data collected for up to 18 months. A parallel convergent mixed methods design was employed. The integrated qualitative and quantitative data suggested that RAM technology offered ongoing monitoring and provided caregivers with a sense of security. Considerable customization was needed so that RAM was most appropriate for persons with ADRD. The findings have important clinical implications when considering how RAM can supplement, or potentially substitute for, ADRD family care.
Hamel, Aimee V.; Sims, Tai L.; Klassen, Dan; Havey, Thomas; Gaugler, Joseph E.
2017-01-01
Reminiscence interventions are potentially effective in improving well-being of persons with memory loss (PWMLs) and may also enhance relationships with family and professional caregivers. Using a parallel convergent mixed-methods design, the feasibility of “Memory Matters” (MM), a mobile device application developed to promote reminiscence, was evaluated. Eighteen PWMLs and eight family members were enrolled from a long-term care facility and asked to use MM for 4 weeks. Participants were observed using MM at enrollment and 2 weeks and completed 1-month interviews. Six staff participants also completed a system review checklist and/or focus group at 1 month. Three qualitative domains were identified: (a) context of use, (b) barriers to use, and (c) MM influences on outcomes. Participants reported real-time social engagement, ease of use, and other benefits. However, PWMLs were unlikely to overcome barriers without assistance. Empirical data indicated that family and staff perceived MM favorably. Participants agreed that MM could provide stimulating, reminiscence-based activity. PMID:26870986
The Relation Between Lung Dust and Lung Pathology in Pneumoconiosis*
Nagelschmidt, G.
1960-01-01
Methods of isolation and analysis of dust from pneumoconiotic lungs are reviewed, and the results of lung dust analyses for different forms of pneumoconiosis are presented. A tentative classification separates beryllium, aluminium, abrasive fume, and asbestos, which cause interstitial or disseminated fibrosis, from quartz, coal, haematite, talc, kaolin, and other dusts, which cause a nodular or focal fibrosis which may change to forms with massive lesions. The data suggest that in the first, but not in the second, group the dusts are relatively soluble; only in the second group do the amount of dust and the severity of fibrosis go in parallel for a given form of pneumoconiosis. In classical silicosis the quartz percentage is higher and the amount of total dust much lower than in coal-miners' pneumoconiosis. Mixed forms of both groups occur, for instance, in diatomite workers. The need for more research, especially in the first group, is pointed out. PMID:13727444
Crack Front Segmentation and Facet Coarsening in Mixed-Mode Fracture
NASA Astrophysics Data System (ADS)
Chen, Chih-Hung; Cambonie, Tristan; Lazarus, Veronique; Nicoli, Matteo; Pons, Antonio J.; Karma, Alain
2015-12-01
A planar crack generically segments into an array of "daughter cracks" shaped as tilted facets when loaded with both a tensile stress normal to the crack plane (mode I) and a shear stress parallel to the crack front (mode III). We investigate facet propagation and coarsening using in situ microscopy observations of fracture surfaces at different stages of quasistatic mixed-mode crack propagation and phase-field simulations. The results demonstrate that the bifurcation from a propagating planar crack front to a segmented one is strongly subcritical, reconciling previous theoretical predictions of linear stability analysis with experimental observations. They further show that facet coarsening is a self-similar process driven by a spatial period-doubling instability of facet arrays.
Maudsley, Gillian
2011-01-01
Some important research questions in medical education and health services research need 'mixed methods research' (particularly synthesizing quantitative and qualitative findings). The approach is not new, but should be more explicitly reported. The broad search question, posed of a disjointed literature, was thus: What is mixed methods research, and how should it relate to medical education research? The focus was on explicit acknowledgement of 'mixing'. Literature searching focused on Web of Knowledge supplemented by other databases across disciplines. Five main messages emerged:
- Thinking quantitative and qualitative, not quantitative versus qualitative
- Appreciating that mixed methods research blends different knowledge claims, enquiry strategies, and methods
- Using a 'horses for courses' [whatever works] approach to the question, and clarifying the mix
- Appreciating how medical education research competes with the 'evidence-based' movement, health services research, and the 'RCT'
- Being more explicit about the role of mixed methods in medical education research, and the required expertise
Mixed methods research is valuable, yet the literature relevant to medical education is fragmented and poorly indexed. The required time, effort, expertise, and techniques deserve better recognition. More write-ups should explicitly discuss the 'mixing' (particularly of findings), rather than report separate components.
Effect of Preheating on the Inertia Friction Welding of the Dissimilar Superalloys Mar-M247 and LSHR
NASA Astrophysics Data System (ADS)
Senkov, O. N.; Mahaffey, D. W.; Semiatin, S. L.
2016-12-01
Differences in the elevated temperature mechanical properties of cast Mar-M247 and forged LSHR make it difficult to produce sound joints of these alloys by inertia friction welding (IFW). While extensive plastic upset occurs on the LSHR side, only a small upset is typically developed on the Mar-M247 side. The limited plastic flow of Mar-M247 thus restricts the extent of "self-cleaning" and mechanical mixing of the mating surfaces, so that defects remain at the bond line after welding. In the present work, the effect of local preheating of Mar-M247 immediately prior to IFW on the welding behavior of Mar-M247/LSHR couples was determined. An increase in the preheat temperature enhanced the plastic flow of Mar-M247 during IFW, which resulted in extensive mechanical mixing with LSHR at the weld interface, the formation of extensive flash on both the Mar-M247 and LSHR sides, and a sound bond. Performed in parallel with the experimental work, finite-element-method (FEM) simulations showed that higher temperatures are achieved within the preheated sample during IFW relative to its non-preheated counterpart, and plastic flow is thus facilitated within it. Microstructure and post-weld mechanical properties of the welded samples were also established.
NASA Astrophysics Data System (ADS)
Enomoto, Ayano; Hirata, Hiroshi
2014-02-01
This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
ERIC Educational Resources Information Center
Er, Harun
2017-01-01
The aim of this study is to evaluate the opinions of social studies teacher candidates on use of biography in science, technology and social change course given in the undergraduate program of social studies education. In this regard, convergent parallel design as a mixed research pattern was used to make use of both qualitative and quantitative…
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
The frequency and level of sweep in mixed hardwood saw logs in the eastern United States
Peter Hamner; Marshall S. White; Philip A. Araman
2007-01-01
Hardwood sawmills traditionally saw logs in a manner that either orients sawlines parallel to the log central axis (straight sawing) or the log surface (allowing for taper). Sweep is characterized as uniform curvature along the entire length of a log. For logs with sweep, lumber yield losses from straight and taper sawing increase with increasing levels of sweep. Curve...
Superquantile/CVaR Risk Measures: Second-Order Theory
2014-07-17
Keywords: superquantiles, conditional value-at-risk, second-order superquantiles, mixed superquantiles, quantile regression. The treatment of second-order superquantiles lies in the domain of generalized regression; in [16] a parallel methodology to that of quantile regression was laid out, yielding a second-order version of quantile regression.
An experimental study on the numbering-up of microchannels for liquid mixing.
Su, Yuanhai; Chen, Guangwen; Kenig, Eugeny Y
2015-01-07
The numbering-up of zigzag-form microchannels for liquid mixing was experimentally investigated in a multichannel micromixer including 8 parallel channels, based on the Villermaux-Dushman reaction system, with an appropriate sulphuric acid concentration. The results showed that the micromixing performance in such micromixers could reach the same quality as in a single microchannel, when flat constructal distributors with bifurcation configurations were used. The mixing performance did not depend on whether a vertical or horizontal micromixer position was selected. Surprisingly, the channel blockage somewhat increased the micromixing performance in the multichannel micromixer due to the fluid redistribution effect of the constructal distributors. This effect could also be confirmed by CFD simulations. However, the channel blockage resulted in a higher pressure drop and thus higher specific energy dissipation in the multichannel micromixer. The local pressure drop caused by fluid splitting and re-combination in the numbering-up technique could be neglected at low Reynolds numbers, but it became larger with increasing flow rates. The operational zone for the mixing process in multichannel micromixers was sub-divided into two parts according to the specific energy dissipation and the mixing mechanisms.
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model, SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the kε and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.
Turbulent Mixing in Gravity Currents with Transverse Shear
NASA Astrophysics Data System (ADS)
White, Brian; Helfrich, Karl; Scotti, Alberto
2010-11-01
A parallel flow with horizontal shear and horizontal density gradient undergoes an intensification of the shear by gravitational tilting and stretching, rapidly breaking down into turbulence. Such flows have the potential for substantial mixing in estuaries and the coastal ocean. We present high-resolution numerical results for the mixing efficiency of these flows, which can be viewed as gravity currents with transverse shear, and contrast them with the well-studied case of stably stratified, homogeneous turbulence (uniform vertical density and velocity gradients). For a sheared gravity current, the buoyancy flux, turbulent Reynolds stress, and dissipation are well out of equilibrium. The total kinetic energy first increases as potential energy is transferred to the gravity current, but rapidly decays once turbulence sets in. Despite the non-equilibrium character, mixing efficiencies are slightly higher but qualitatively similar to homogeneous stratified turbulence. Efficiency decreases in the highly energetic regime where the dissipation rate is large compared with viscosity and stratification, ε/(νN²) > 100, further declining as turbulence decays and kinetic energy dissipation dominates the buoyancy flux. In general, the mixing rate, parameterized by a turbulent eddy diffusivity, increases with the strength of the transverse shear.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
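The abstract leaves the algorithm's internals aside, but the core idea of exploiting parallel hardware in gradient-dependent minimization can be illustrated by evaluating the components of a finite-difference gradient concurrently. A minimal Python sketch, assuming a toy objective and a process pool; it illustrates the parallel-evaluation idea, not the Jacobson-Oksman scheme itself.

```python
import numpy as np
from multiprocessing import Pool

def f(x):
    # Sample objective (an assumption): a smooth convex test function.
    return float(np.sum((x - 1.0) ** 2) + 0.5 * np.sum(x ** 4))

def partial_derivative(args):
    # Central finite difference along one coordinate; each component
    # is independent, so all n evaluations can run concurrently.
    x, i, h = args
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

def parallel_gradient(x, pool, h=1e-6):
    return np.array(pool.map(partial_derivative,
                             [(x, i, h) for i in range(x.size)]))

if __name__ == "__main__":
    x = np.full(8, 3.0)
    with Pool() as pool:
        for _ in range(200):                     # plain gradient descent
            x -= 0.05 * parallel_gradient(x, pool)
    print(x)                                     # approaches the minimizer
```

The wall-clock gain comes from the n independent function evaluations per gradient, which is exactly the kind of work a vector-streaming or multi-processor machine can overlap.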
Efficient parallel implicit methods for rotary-wing aerodynamics calculations
NASA Astrophysics Data System (ADS)
Wissink, Andrew M.
Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by a lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples the point-relaxation approach of the Data-Parallel Lower-Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as with the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.
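The hybrid relaxation pattern described here, old (Jacobi-style) data across inter-processor boundaries combined with Gauss-Seidel sweeps inside each processor's block, can be mimicked serially. A sketch for a 1D Poisson model problem with two emulated blocks; the problem and all parameters are placeholders, not the TURNS implementation.

```python
import numpy as np

# 1D Poisson model problem: u'' = f on (0,1), u(0) = u(1) = 0.
n = 64
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n + 2)      # interior unknowns u[1..n] plus boundary values
mid = n // 2             # split the interior into two "processor" blocks

for sweep in range(4000):            # simple relaxation; slow but illustrative
    frozen = u.copy()                # block-boundary data from the last sweep
    for lo, hi in ((1, mid), (mid + 1, n)):
        for i in range(lo, hi + 1):  # Gauss-Seidel sweep inside the block
            # At the block's left edge use frozen (Jacobi-style) data, the
            # serial stand-in for the DP-LUR inter-processor exchange.
            left = u[i - 1] if i != lo else frozen[i - 1]
            u[i] = 0.5 * (left + u[i + 1] - h * h * f[i - 1])

# Compare with the exact solution u(x) = x(x-1)/2.
x = np.linspace(0.0, 1.0, n + 2)
print(np.max(np.abs(u - 0.5 * x * (x - 1.0))))
```

Freezing only the block interfaces is what makes the sweep communication-free within an iteration, at the cost of slightly slower convergence than a fully sequential Gauss-Seidel pass.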
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
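Schemes like this rest on "just a few simple message passing routines"; the most basic is a ghost-cell exchange between neighbouring grid blocks. A sketch using mpi4py (an assumption; the original work predates today's MPI bindings), runnable as e.g. `mpiexec -n 4 python exchange.py`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one block of a 1D decomposition plus a ghost cell per side.
block = np.full(10, float(rank))
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Shift right: send my right edge rightward, receive my left ghost.
ghost_left = comm.sendrecv(block[-1], dest=right, source=left)
# Shift left: send my left edge leftward, receive my right ghost.
ghost_right = comm.sendrecv(block[0], dest=left, source=right)

print(rank, ghost_left, ghost_right)   # PROC_NULL neighbours yield None
```

In a block-structured AMR code the same exchange runs between sibling and coarse/fine blocks; the point of the paper is that the serial refinement logic can stay intact around such a thin messaging layer.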
Addressing Research Design Problem in Mixed Methods Research
NASA Astrophysics Data System (ADS)
Alavi, Hamed; Hąbek, Patrycja
2016-03-01
Alongside other disciplines in the social sciences, management researchers increasingly use mixed methods research in their scientific investigations. The mixed methods approach can also be used in the field of production engineering. Compared with traditional quantitative and qualitative research methods, the growing popularity of mixed methods research in management science can be traced to several factors. First, any particular discipline in management can be theoretically related to it. Second, the concurrent approach of mixed methods research to inductive and deductive research logic gives researchers the opportunity to generate theory and test hypotheses in one study simultaneously. In addition, it provides a better justification for the chosen method of investigation and higher validity for the answers obtained to research questions. Despite the increasing popularity of mixed methods among management scholars, there is still a need for a comprehensive approach to research design typology and process in mixed methods research from the perspective of management science. In this paper, the authors explain the fundamental principles of mixed methods research, its typology, and the steps in its design process.
Pragmatism, Evidence, and Mixed Methods Evaluation
ERIC Educational Resources Information Center
Hall, Jori N.
2013-01-01
Mixed methods evaluation has a long-standing history of enhancing the credibility of evaluation findings. However, using mixed methods in a utilitarian way implicitly emphasizes convenience over engaging with its philosophical underpinnings (Denscombe, 2008). Because of this, some mixed methods evaluators and social science researchers have been…
Structural issues affecting mixed methods studies in health research: a qualitative study.
O'Cathain, Alicia; Nicholl, Jon; Murphy, Elizabeth
2009-12-09
Health researchers undertake studies which combine qualitative and quantitative methods. Little attention has been paid to the structural issues affecting this mixed methods approach. We explored the facilitators and barriers to undertaking mixed methods studies in health research. Face-to-face semi-structured interviews with 20 researchers experienced in mixed methods research in health in the United Kingdom. Structural facilitators for undertaking mixed methods studies included a perception that funding bodies promoted this approach, and the multidisciplinary constituency of some university departments. Structural barriers to exploiting the potential of these studies included a lack of education and training in mixed methods research, and a lack of templates for reporting mixed methods articles in peer-reviewed journals. The 'hierarchy of evidence' relating to effectiveness studies in health care research, with the randomised controlled trial as the gold standard, appeared to pervade the health research infrastructure. Thus integration of data and findings from qualitative and quantitative components of mixed methods studies, and dissemination of integrated outputs, tended to occur through serendipity and effort, further highlighting the presence of structural constraints. Researchers are agents who may also support current structures - journal reviewers and editors, and directors of postgraduate training courses - and thus have the ability to improve the structural support for exploiting the potential of mixed methods research. The environment for health research in the UK appears to be conducive to mixed methods research but not to exploiting the potential of this approach. Structural change, as well as change in researcher behaviour, will be necessary if researchers are to fully exploit the potential of using mixed methods research.
Reliability of a Parallel Pipe Network
NASA Technical Reports Server (NTRS)
Herrera, Edgar; Chamis, Christopher (Technical Monitor)
2001-01-01
The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
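Of the reliability methods mentioned, the most direct to sketch is Monte Carlo simulation: sample the uncertain design variables, push them through the hydraulic model, and count how often a response falls below its specified minimum. All distributions and pump/pipe coefficients below are invented placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain design variables (assumed distributions, placeholder values):
H0 = rng.normal(80.0, 4.0, n)        # pump shutoff head [m]
a  = rng.normal(0.002, 0.0002, n)    # pump curve coefficient
r1 = rng.normal(0.002, 0.0002, n)    # resistance of parallel line 1
r2 = rng.normal(0.003, 0.0002, n)    # resistance of parallel line 2

Q = 120.0                            # total flow demand through the main pump
# Parallel lines see the same head loss: r1*q1^2 = r2*q2^2 with q1 + q2 = Q.
q1 = Q / (1.0 + np.sqrt(r1 / r2))
head_pump = H0 - a * Q**2            # head delivered by the main pump
head_line = head_pump - r1 * q1**2   # head remaining downstream of line 1

# Probability that the line head falls below a specified minimum of 40 m.
print("P(failure) ~", np.mean(head_line < 40.0))
```

More efficient reliability methods (FORM/SORM, importance sampling) refine exactly this computation when the failure probability is small.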
Methods for operating parallel computing systems employing sequenced communications
Benner, R.E.; Gustafson, J.L.; Montry, G.R.
1999-08-10
A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.
Methods for operating parallel computing systems employing sequenced communications
Benner, Robert E.; Gustafson, John L.; Montry, Gary R.
1999-01-01
A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well-known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object-oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well-known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson-type problem. In chapter 4 the mixed finite element method is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to efficiently solve the linear systems arising from the application of the mixed finite element method. The proposed method is block preconditioning.
Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs is the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier-Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient-type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; actually they could have been combined in a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black-Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack which include popular elements such as beams and plates and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications.
The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and they present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall the book is well written, the subject of each chapter is well presented and can serve as a reference for graduate students, researchers and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been proposed to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used for auxiliary work such as data input/output (IO). The computing capability of the CPU is thus ignored and underused. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging, the bottlenecks of limited memory and frequent data transfers are removed, and several optimization strategies are applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves SAR imaging efficiency 270-fold over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate. PMID:27070606
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-04-07
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been proposed to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used for auxiliary work such as data input/output (IO). The computing capability of the CPU is thus ignored and underused. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging, the bottlenecks of limited memory and frequent data transfers are removed, and several optimization strategies are applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves SAR imaging efficiency 270-fold over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate.
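The task partitioning and scheduling idea, keeping CPU cores and the GPU simultaneously busy by handing image blocks to whichever resource is free, can be sketched with a shared work queue. A Python sketch; both kernel bodies are hypothetical stand-ins (the real kernels are the imaging algorithm itself), and the FFT is only a placeholder workload.

```python
import queue
import threading
import numpy as np

def process_on_cpu(block):
    # Stand-in for the AVX/multi-core CPU imaging kernel (hypothetical).
    return np.fft.fft(block)

def process_on_gpu(block):
    # Stand-in for the CUDA imaging kernel (hypothetical); a real version
    # would launch GPU work, e.g. via CUDA streams.
    return np.fft.fft(block)

def worker(kernel, work, results):
    # Each worker pulls blocks until the shared queue is empty, which gives
    # dynamic load balancing between the CPU and GPU "resources".
    while True:
        try:
            idx, block = work.get_nowait()
        except queue.Empty:
            return
        results[idx] = kernel(block)

raw = np.random.randn(64, 4096)            # placeholder raw-data blocks
work = queue.Queue()
for idx, block in enumerate(raw):
    work.put((idx, block))
results = [None] * len(raw)

threads = [threading.Thread(target=worker, args=(k, work, results))
           for k in (process_on_cpu, process_on_cpu, process_on_gpu)]
for t in threads:
    t.start()
for t in threads:
    t.join()
image = np.vstack(results)                 # assemble the full image
```

The pull-based queue means a fast GPU worker naturally takes more blocks than a CPU worker, which is the essence of the collaborative scheduling described in the abstract.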
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focus on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the researchers involved.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation and a master/slave algorithm is developed to minimize communication cost in large table look-ups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel idiom.
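The role of the sorting step can be shown in data parallel style with array primitives: sort particles by cell index so that particles sharing a cell, the collision candidates, become adjacent. A numpy sketch of the idea; the adjacent-pair rule is a simplification, not the SPS collision-selection logic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_cells = 20, 8
cell = rng.integers(0, n_cells, n_particles)   # cell index of each particle

order = np.argsort(cell)          # data parallel sort by cell index
sorted_cell = cell[order]

# Particles adjacent in the sorted array with equal cell indices now
# share a cell, i.e. they are collision candidates.
same_cell = sorted_cell[1:] == sorted_cell[:-1]
pairs = np.stack([order[:-1][same_cell], order[1:][same_cell]], axis=1)
print(pairs)                      # candidate particle-index pairs
```

On a SIMD machine like the Connection Machine, both the sort and the neighbour comparison map onto primitive data parallel operations, which is why the reformulation hinges on them.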
The use of "mixing" procedure of mixed methods in health services research.
Zhang, Wanqing; Creswell, John
2013-08-01
Mixed methods research has emerged alongside qualitative and quantitative approaches as an important tool for health services researchers. Despite growing interest among health services researchers in using mixed methods designs, little has been done to identify the procedural aspects of doing so. To describe how mixed methods researchers mix the qualitative and quantitative aspects of their studies in health services research, we searched PubMed for articles using mixed methods in health services research published between January 1, 2006 and December 30, 2010. We identified and reviewed 30 published health services research articles on studies in which mixed methods had been used. We selected 3 articles as illustrations to help health services researchers conceptualize the type of mixing procedures that they were using. Three main "mixing" procedures have been applied within these studies: (1) the researchers analyzed the 2 types of data at the same time but separately and integrated the results during interpretation; (2) the researchers connected the qualitative and quantitative portions in phases in such a way that 1 approach was built upon the findings of the other approach; and (3) the researchers mixed the 2 data types by embedding the analysis of 1 data type within the other. "Mixing" in mixed methods is more than just the combination of 2 independent components of the quantitative and qualitative data. The use of "mixing" procedures in health services research involves the integration, connection, and embedding of these 2 data components.
Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope
NASA Astrophysics Data System (ADS)
Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.
2007-01-01
X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend (PE/PS) sample was used to test the 3D phase retrieval method in the parallel-beam illuminated microscope. Because the polymer sample is thick, the phase retardation is strongly mixed and the image cannot be resolved when the 2D transport-of-intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for a thick sample. Our method integrates the 3D TIE with the Fourier slice theorem for solving thick phase samples. In our experiment, eight defocus-series image data sets were recorded covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation for solving the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two different components of the polymer blend can be distinguished in the phase tomography.
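The paper's contribution is the 3D TIE/Fourier-slice combination, but the 2D TIE building block can be sketched: under a uniform-intensity assumption, the phase follows from the axial intensity derivative through an FFT-based inverse Laplacian. A sketch, assuming uniform in-focus intensity I0, a two-image estimate of dI/dz, and one common sign convention, none of which are taken from the paper.

```python
import numpy as np

def tie_phase(I_plus, I_minus, dz, I0, wavelength):
    """Recover phase from two defocused images via the 2D TIE,
    assuming uniform in-focus intensity I0."""
    k = 2.0 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2.0 * dz)       # axial intensity derivative

    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx)
    fy = np.fft.fftfreq(ny)
    q2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)  # |q|^2
    q2[0, 0] = 1.0                               # avoid divide-by-zero at DC

    # TIE with uniform intensity: I0 * laplacian(phi) = -k * dI/dz,
    # inverted in Fourier space where laplacian -> -|q|^2.
    F_phi = np.fft.fft2(-k * dIdz / I0) / (-q2)
    F_phi[0, 0] = 0.0                            # the DC phase is arbitrary
    return np.real(np.fft.ifft2(F_phi))

# Placeholder usage on synthetic data (illustrative values only):
rng = np.random.default_rng(0)
I_plus = 1.0 + 1e-3 * rng.standard_normal((64, 64))
I_minus = 1.0 + 1e-3 * rng.standard_normal((64, 64))
phi = tie_phase(I_plus, I_minus, dz=1e-6, I0=1.0, wavelength=1e-10)
print(phi.shape)
```

The 3D method in the paper applies this kind of inversion jointly with the Fourier slice theorem over the tilt series, rather than slice by slice.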
New scheme for image edge detection using the switching mechanism of nonlinear optical material
NASA Astrophysics Data System (ADS)
Pahari, Nirmalya; Mukhopadhyay, Sourangshu
2006-03-01
The limitations of electronics in conducting parallel arithmetic, algebraic, and logic processing are well known. Very high-speed (terahertz) performance cannot be expected from conventional electronic mechanisms. To achieve such performance we can introduce optics instead of electronics for information processing, computing, and data handling. Nonlinear optical material (NOM) is a successful candidate in this regard, playing a major role in the domain of optically controlled switching systems. Some NOMs reflect a probe beam in the presence of two read beams (or pump beams) exciting the material from opposite directions, following the principle of four-wave mixing. In image processing, edge extraction from an image is an important and essential task. Several optical methods of digital image processing are used for properly evaluating image edges. We propose here a new method of image edge detection, extraction, and enhancement using AND-based switching operations with NOM. In this process we use the optically inverted image of a supplied image, which can be obtained by the EXOR switching operation of the NOM.
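The Boolean core of the proposal, ANDing the image with an optically inverted, slightly displaced copy so that only boundary pixels survive, can be checked digitally. A numpy sketch of the logic alone; the optical four-wave-mixing switch is what the paper actually contributes, and the four-direction displacement scheme here is an assumption.

```python
import numpy as np

img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True                  # test object: a bright square

edges = np.zeros_like(img)
for axis in (0, 1):
    for shift in (1, -1):
        # NOT of a displaced copy, then AND with the original image:
        # only pixels on the object's boundary survive the conjunction.
        inverted_shifted = ~np.roll(img, shift, axis=axis)
        edges |= img & inverted_shifted

print(edges.sum())                        # roughly the square's perimeter
```

Each AND with one displaced, inverted copy extracts one side of the boundary; the union over four displacement directions yields the full edge map.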
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse-grained parallelization and OpenMP [9] for fine-grained loop-level parallelism. The MPI programming paradigm assumes a private address space for each process. Data are transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse-grain process-level parallelization and loop-level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
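The hybrid pattern, coarse-grained parallelism across private address spaces plus fine-grained loop-level parallelism inside each process, can be made concrete in Python with mpi4py standing in for MPI and a thread pool standing in for OpenMP (an imperfect analogy, offered only to show the two levels).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Coarse grain: each MPI process owns a private slab of the global data.
local = np.random.rand(1_000_000)

# Fine grain: loop-level work on the slab is split across threads
# (numpy releases the GIL inside its compiled loops).
def work(chunk):
    return float(np.sum(np.sqrt(chunk)))

with ThreadPoolExecutor(max_workers=4) as pool:
    local_sum = sum(pool.map(work, np.array_split(local, 4)))

# Explicit message exchange combines the per-process results.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(total)
```

MLP differs from this picture mainly in the communication step: instead of the explicit reduce, processes would read and write a shared memory arena.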
NASA Astrophysics Data System (ADS)
Henneberger, J.; Fugal, J. P.; Stetzer, O.; Lohmann, U.
2013-05-01
Measurements of the microphysical properties of mixed-phase clouds with high spatial resolution are important to understand the processes inside these clouds. This work describes the design and characterization of the newly developed ground-based field instrument HOLIMO II (HOLographic Imager for Microscopic Objects II). HOLIMO II uses digital in-line holography to in-situ image cloud particles in a well defined sample volume. By an automated algorithm, two-dimensional images of single cloud particles between 6 and 250 μm in diameter are obtained and the size spectrum, the concentration and water content of clouds are calculated. By testing the sizing algorithm with monosized beads a systematic overestimation near the resolution limit was found, which has been used to correct the measurements. Field measurements from the high altitude research station Jungfraujoch, Switzerland, are presented. The measured number size distributions are in good agreement with parallel measurements by a fog monitor (FM-100, DMT, Boulder USA). The field data shows that HOLIMO II is capable of measuring the number size distribution with a high spatial resolution and determines ice crystal shape, thus providing a method of quantifying variations in microphysical properties. A case study over a period of 8 h has been analyzed, exploring the transition from a liquid to a mixed-phase cloud, which is the longest observation of a cloud with a holographic device. During the measurement period, the cloud does not completely glaciate, contradicting earlier assumptions of the dominance of the Wegener-Bergeron-Findeisen (WBF) process.
NASA Astrophysics Data System (ADS)
Henneberger, J.; Fugal, J. P.; Stetzer, O.; Lohmann, U.
2013-11-01
Measurements of the microphysical properties of mixed-phase clouds with high spatial resolution are important to understand the processes inside these clouds. This work describes the design and characterization of the newly developed ground-based field instrument HOLIMO II (HOLographic Imager for Microscopic Objects II). HOLIMO II uses digital in-line holography to in situ image cloud particles in a well-defined sample volume. By an automated algorithm, two-dimensional images of single cloud particles between 6 and 250 μm in diameter are obtained and the size spectrum, the concentration and water content of clouds are calculated. By testing the sizing algorithm with monosized beads a systematic overestimation near the resolution limit was found, which has been used to correct the measurements. Field measurements from the high altitude research station Jungfraujoch, Switzerland, are presented. The measured number size distributions are in good agreement with parallel measurements by a fog monitor (FM-100, DMT, Boulder USA). The field data shows that HOLIMO II is capable of measuring the number size distribution with a high spatial resolution and determines ice crystal shape, thus providing a method of quantifying variations in microphysical properties. A case study over a period of 8 h has been analyzed, exploring the transition from a liquid to a mixed-phase cloud, which is the longest observation of a cloud with a holographic device. During the measurement period, the cloud does not completely glaciate, contradicting earlier assumptions of the dominance of the Wegener-Bergeron-Findeisen (WBF) process.
Towards enhancing and delaying disturbances in free shear flows
NASA Technical Reports Server (NTRS)
Criminale, W. O.; Jackson, T. L.; Lasseigne, D. G.
1994-01-01
The family of shear flows comprising the jet, wake, and the mixing layer are subjected to perturbations in an inviscid incompressible fluid. By modeling the basic mean flows as parallel with piecewise linear variations for the velocities, complete and general solutions to the linearized equations of motion can be obtained in closed form as functions of all space variables and time when posed as an initial value problem. The results show that there is a continuous as well as the discrete spectrum that is more familiar in stability theory, and therefore there can be both algebraic and exponential growth of disturbances in time. These solutions make it feasible to consider control of such flows. To this end, the possibility of enhancing the disturbances in the mixing layer and delaying the onset in the jet and wake is investigated. It is found that growth of perturbations can be delayed to a considerable degree for the jet and the wake but, by comparison, cannot be enhanced in the mixing layer. By using moving coordinates, a method for demonstrating the predominant early and long time behavior of disturbances in these flows is given for continuous velocity profiles. It is shown that the early time transients are always algebraic whereas the asymptotic limit is that of an exponential normal mode. Numerical treatment of the new governing equations confirms the conclusions reached by use of the piecewise linear basic models. Although not pursued here, feedback mechanisms designed for control of the flow could be devised using the results of this work.
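The structure the authors describe, an algebraically growing continuous-spectrum transient plus discrete normal modes that dominate at long times, can be written schematically as follows (a generic statement of that decomposition, not their closed-form solution):

```latex
q(x,t) = \underbrace{q_c(x,t)}_{\text{continuous spectrum: algebraic in } t}
       + \sum_j A_j\, \hat{q}_j(x)\, e^{-i \omega_j t},
\qquad
q(x,t) \sim \hat{q}_{j^*}(x)\, e^{-i \omega_{j^*} t} \quad (t \to \infty),
```

where j* denotes the least damped (or most unstable) discrete mode; control then amounts to delaying the transient algebraic growth or weakening the dominant mode.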
Structural issues affecting mixed methods studies in health research: a qualitative study
2009-01-01
Background: Health researchers undertake studies which combine qualitative and quantitative methods. Little attention has been paid to the structural issues affecting this mixed methods approach. We explored the facilitators and barriers to undertaking mixed methods studies in health research. Methods: Face-to-face semi-structured interviews with 20 researchers experienced in mixed methods research in health in the United Kingdom. Results: Structural facilitators for undertaking mixed methods studies included a perception that funding bodies promoted this approach, and the multidisciplinary constituency of some university departments. Structural barriers to exploiting the potential of these studies included a lack of education and training in mixed methods research, and a lack of templates for reporting mixed methods articles in peer-reviewed journals. The 'hierarchy of evidence' relating to effectiveness studies in health care research, with the randomised controlled trial as the gold standard, appeared to pervade the health research infrastructure. Thus integration of data and findings from qualitative and quantitative components of mixed methods studies, and dissemination of integrated outputs, tended to occur through serendipity and effort, further highlighting the presence of structural constraints. Researchers are agents who may also support current structures - journal reviewers and editors, and directors of postgraduate training courses - and thus have the ability to improve the structural support for exploiting the potential of mixed methods research. Conclusion: The environment for health research in the UK appears to be conducive to mixed methods research but not to exploiting the potential of this approach. Structural change, as well as change in researcher behaviour, will be necessary if researchers are to fully exploit the potential of using mixed methods research. PMID:20003210
Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen
2012-04-30
Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy-making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed methods studies in order to assess how the quality of reporting and rigor of qualitative-only studies compares with that of mixed methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
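The "top down" half of such a scheme is easy to sketch in serial: recursively subdivide an oct-tree cell while it lies near the body, until a target depth is reached. A toy Python sketch; the refinement test, the unit-sphere "geometry", and all parameters are placeholders, and the paper's actual contribution is doing this in parallel at scale.

```python
import numpy as np

def refine(center, size, depth, cells, max_depth=6):
    """Top-down oct-tree refinement around a unit sphere at the origin."""
    d = np.linalg.norm(center)
    # Conservative test: could this cell intersect the sphere's surface?
    near_body = abs(d - 1.0) < size * np.sqrt(3.0)
    if depth == max_depth or not near_body:
        cells.append((center, size))          # keep as a leaf cell
        return
    for dx in (-0.25, 0.25):                  # split into 8 children
        for dy in (-0.25, 0.25):
            for dz in (-0.25, 0.25):
                child = center + size * np.array([dx, dy, dz])
                refine(child, size / 2.0, depth + 1, cells, max_depth)

cells = []
refine(np.zeros(3), 4.0, 0, cells)    # root cell of width 4 around the body
print(len(cells), "leaf cells")
```

In the parallel setting described above, subtrees of this recursion are distributed across processors, and the "bottom up" phase then coarsens away cells no simulation region needs.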
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining the standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, energy function, updating equations, and algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson, and diffusion equations, and on a system of PDEs. The software MATLAB is used to obtain the results in both tabular and graphical form. The results are similar in terms of accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net method makes it easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
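The HFD idea, discretizing the PDE with finite differences and letting a network relax an energy function whose minimum is the discrete solution, can be emulated with explicit gradient dynamics. A sketch for a 1D Poisson problem; the quadratic energy and the simple descent update are the generic Hopfield-style choice, not necessarily the paper's exact formulation.

```python
import numpy as np

# Finite-difference discretization of -u'' = f on (0,1), u(0) = u(1) = 0.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)     # chosen so the exact solution is sin(pi x)

A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = f

# Hopfield-style relaxation: for this symmetric positive definite A the
# "network energy" E(u) = 0.5*u^T A u - b^T u is minimized by the solution,
# and each neuron update is a descent step along -grad E = -(A u - b).
u = np.zeros(n)
eta = 0.25 * h**2                    # stable step size, below 2/||A||
for _ in range(30_000):
    u -= eta * (A @ u - b)

print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at discretization level
```

All n updates in each step are independent, which is the parallelism the abstract refers to: on parallel hardware every "neuron" can relax simultaneously.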
System-wide effects of Global Fund investments in Nepal.
Trägård, Anna; Shrestha, Ishwar Bahadur
2010-11-01
Nepal, with a concentrated HIV epidemic and high burden of tuberculosis (TB) and malaria, was perceived to have immensely benefited from grants by the Global Fund to Fight AIDS, Tuberculosis and Malaria in addressing the three diseases, amounting to total approved funding of US$80 million. This paper looks at the interaction and integration of Global Fund-supported programmes and national health systems. A mixed method 'case study' approach based on the Systemic Rapid Assessment Toolkit (SYSRA) was used to systematically analyse across the main health systems functional domains. The Country Coordinating Mechanism has been credited with providing stewardship in attracting additional resources and providing oversight. The involvement of civil society for delivering key HIV and malaria interventions targeting high-risk groups was perceived to be highly beneficial. TB and malaria services were found to be well integrated into the public health care delivery system, while HIV services targeting at-risk groups were often delivered using parallel structures. Political instability, absence of continuity in leadership and sub-optimal investments in health were together perceived to have led to fragmentation of financing and planning activities, especially in the HIV programme. The demand for timely programmatic and financial reporting for donor-supported programmes has contributed to the creation of parallel monitoring and evaluation structures, with missed opportunities for strengthening and utilizing the national health management information systems.
The Casimir effect for parallel plates revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.
2007-10-15
The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet bc (the same result may be proved for Neumann or mixed) and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy-momentum tensor, as remarked by DeWitt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.
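For orientation, the textbook regularization-based computation that the lqft analysis is contrasted against runs as follows (a standard result, not the paper's derivation): summing the zero-point energies of the Dirichlet modes and removing the divergence by zeta-function regularization gives, per unit plate area (ħ = c = 1),

```latex
\begin{align}
  \frac{E(a)}{A}
    &= \frac{1}{2}\sum_{n=1}^{\infty}\int\!\frac{d^{2}k_{\parallel}}{(2\pi)^{2}}
       \sqrt{k_{\parallel}^{2}+\left(\frac{n\pi}{a}\right)^{2}}
       \;\xrightarrow{\;\zeta\text{-regularization}\;}\;
       -\frac{\pi^{2}}{1440\,a^{3}}, \\[2pt]
  \frac{F}{A}
    &= -\frac{\partial}{\partial a}\frac{E(a)}{A}
     = -\frac{\pi^{2}}{480\,a^{4}} \quad\text{(attractive)}.
\end{align}
```

The abstract's point is precisely that this finite Dirichlet value rests on a regularization-dependent surface term: in the lqft framework the energy per unit area ε diverges for Dirichlet bc, while the analogous periodic-bc sum is finite without such a subtraction.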
LFRic: Building a new Unified Model
NASA Astrophysics Data System (ADS)
Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike
2017-04-01
The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges that will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling far greater scalability and flexibility to accommodate future supercomputer architectures. The design of the model revolves around the principle of a 'separation of concerns', whereby the natural science aspects of the code can be developed without worrying about the underlying architecture, while machine-dependent optimisations can be carried out at a higher level. These principles are put into practice through the development of an autogenerated Parallel Systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.
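The 'separation of concerns' can be caricatured in a few lines of Python. This is emphatically not PSyclone output (PSyclone generates Fortran), and all names here are invented; the sketch only mirrors the layering the abstract describes: a science kernel that knows nothing about parallelism, and a PSy-like layer that owns iteration and scheduling.

```python
# Toy illustration of the PSy-layer idea: parallelism decisions live in one
# generated layer, and the science kernel can be developed independently.
from concurrent.futures import ThreadPoolExecutor

def kernel_increment(column):
    """Natural-science code: acts on one vertical column, no parallelism here."""
    return [v + 1.0 for v in column]          # stand-in for real physics

def psy_layer(field, parallel=True):
    """Machine-level code: owns the loop over columns and the scheduling, so
    threading can be changed without touching the kernel above."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            return list(pool.map(kernel_increment, field))
    return [kernel_increment(col) for col in field]

mesh = [[0.0] * 4 for _ in range(8)]          # 8 columns of 4 levels each
print(psy_layer(mesh)[0])                     # -> [1.0, 1.0, 1.0, 1.0]
```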
Guetterman, Timothy C; Creswell, John W; Wittink, Marsha; Barg, Fran K; Castro, Felipe G; Dahlberg, Britt; Watkins, Daphne C; Deutsch, Charles; Gallo, Joseph J
2017-01-01
Demand for training in mixed methods is high, with little research on faculty development or assessment in mixed methods. We describe the development of a self-rated mixed methods skills assessment and provide validity evidence. The instrument taps six research domains: "Research question," "Design/approach," "Sampling," "Data collection," "Analysis," and "Dissemination." Respondents are asked to rate their ability to define or explain concepts of mixed methods under each domain, their ability to apply the concepts to problems, and the extent to which they need to improve. We administered the questionnaire to 145 faculty and students using an internet survey. We analyzed descriptive statistics and performance characteristics of the questionnaire, using Cronbach's alpha to assess reliability and an analysis of variance comparing a mixed methods experience index with assessment scores to assess criterion relatedness. Internal consistency reliability was high for the total set of items (0.95) and adequate (≥0.71) for all but one subscale. Consistent with establishing criterion validity, respondents who had more professional experience with mixed methods (e.g., having published a mixed methods article) rated themselves as more skilled, a difference that was statistically significant across the research domains. This self-rated mixed methods assessment instrument may be a useful tool to assess skills in mixed methods for training programs. It can be applied widely at the graduate and faculty levels. For the learner, assessment may lead to enhanced motivation to learn and training focused on self-identified needs. For faculty, the assessment may improve curriculum and course content planning.
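For reference, the reliability statistic quoted above is the standard Cronbach's alpha for a scale of k items (textbook definition, not reproduced from the paper):

```latex
% Cronbach's alpha for items Y_1..Y_k with total score X = sum_i Y_i:
\alpha \;=\; \frac{k}{k-1}
  \left( 1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}} \right),
\qquad X = \sum_{i=1}^{k} Y_i .
```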
Pluye, Pierre; Légaré, France; Haggerty, Jeannie; Gore, Genevieve C; Sherif, Reem El; Poitras, Marie-Ève; Beaulieu, Marie-Claude; Beaulieu, Marie-Dominique; Bush, Paula L; Couturier, Yves; Débarges, Béatrice; Gagnon, Justin; Giguère, Anik; Grad, Roland; Granikov, Vera; Goulet, Serge; Hudon, Catherine; Kremer, Bernardo; Kröger, Edeltraut; Kudrina, Irina; Lebouché, Bertrand; Loignon, Christine; Lussier, Marie-Thérèse; Martello, Cristiano; Nguyen, Quynh; Pratt, Rebekah; Rihoux, Benoit; Rosenberg, Ellen; Samson, Isabelle; Senn, Nicolas; Li Tang, David; Tsujimoto, Masashi; Vedel, Isabelle; Ventelou, Bruno; Wensing, Michel; Bigras, Magali
2017-01-01
Introduction: Patients with complex care needs (PCCNs) often suffer from combinations of multiple chronic conditions, mental health problems, drug interactions and social vulnerability, which can lead to healthcare services overuse, underuse or misuse. Typically, PCCNs face interactional issues and unmet decisional needs regarding possible options in a cascade of interrelated decisions involving different stakeholders (themselves, their families, their caregivers, their healthcare practitioners). Gaps in knowledge, values clarification and social support in situations where options need to be deliberated hamper effective decision support interventions. This review aims to (1) assess decisional needs of PCCNs from the perspective of stakeholders, (2) build a taxonomy of these decisional needs and (3) prioritise decisional needs with knowledge users (clinicians, patients and managers). Methods and analysis: This review will be based on the interprofessional shared decision making (IP-SDM) model and the Ottawa Decision Support Framework. Applying a participatory research approach, we will identify potentially relevant studies through a comprehensive literature search; select relevant ones using eligibility criteria inspired by our previous scoping review on PCCNs; appraise quality using the Mixed Methods Appraisal Tool; conduct a three-step synthesis (sequential exploratory mixed methods design) to build a taxonomy of key decisional needs; and integrate these results with those of a parallel qualitative assessment of PCCNs' decisional needs (semistructured interviews and a focus group with stakeholders). Ethics and dissemination: This systematic review, together with the qualitative study (approved by the Centre Intégré Universitaire de Santé et Service Sociaux du Saguenay-Lac-Saint-Jean ethics committee), will produce a working taxonomy of key decisional needs (ontological contribution), to inform the subsequent user-centred design of a support tool for addressing PCCNs' decisional needs (practical contribution). We will adapt the IP-SDM model, which normally deals with a single decision, for PCCNs who experience a cascade of decisions involving different stakeholders (theoretical contribution). Knowledge users will facilitate dissemination of the results in the Canadian primary care network. PROSPERO registration number CRD42015020558. PMID:29133314
Taylor, Francesca; Taylor, Celia; Baharani, Jyoti; Nicholas, Johann; Combes, Gill
2016-08-02
As a result of difficulties related to their illness, diagnosis and treatment, patients with end-stage renal disease experience significant emotional and psychological problems, which, if untreated, can have a considerable negative impact on their health and wellbeing. Despite evidence that patients desire improved support, management of their psychosocial problems, particularly at the lower level, remains sub-optimal. There is limited understanding of the specific support that patients need and want, from whom, and when, and also a lack of data on what helps and hinders renal staff in identifying and responding to their patients' support needs, and how barriers to doing so might be overcome. Through this research we therefore seek to determine what, when, and how support for patients with lower-level emotional and psychological problems should be integrated into the end-stage renal disease pathway. The research will involve two linked, multicentre studies, designed to identify and consider the perspectives of patients at five different stages of the end-stage renal disease pathway (Study 1), and of the renal staff working with them (Study 2). A convergent, parallel mixed methods design will be employed for both studies, with quantitative and qualitative data collected separately. For each study, the data sets will be analysed separately and the results then compared or combined using interpretive analysis. A further stage of synthesis will employ data-driven thematic analysis to identify triangulation and frequency of themes across pathway stages, and patterns and plausible explanations of effects. There is an important need for this research given the high frequency of lower-level distress experienced by end-stage renal disease patients and the lack of progress to date in integrating support for their lower-level psychosocial needs into the care pathway. Use of a mixed methods design across the two studies will generate a holistic patient and healthcare professional perspective that is more likely to identify viable solutions enabling implementation of timely and integrated care. Based on the research outputs, appropriate support interventions will be developed, implemented and evaluated in a linked follow-on study.
Becares, Laia; Nazroo, James
2013-01-01
Ethnic minority people have been suggested to be healthier when living in areas with a higher concentration of people from their own ethnic group, a so-called ethnic density effect. Explanations of the ethnic density effect propose that the positive health outcomes are partially attributable to the protective and buffering effects of increased social capital. In fact, a parallel literature has reported increased levels of social capital in areas of greater ethnic residential diversity, but to date no study in England has explored whether increased social capital mediates the protective effect attributed to the residential concentration of ethnic minority groups on health. We employ a mixed-methods approach to examine the association between ethnicity, social capital and mental health. We analyse geocoded data from the 2004 Health Survey for England to examine the association between (1) ethnic residential concentration and health; (2) ethnic residential concentration and social capital; (3) social capital and health; and (4) the mediating effect of social capital on the association between the residential concentration of ethnic groups and health. To further add to our understanding of the processes involved, data from a qualitative study of older ethnic minority people were used to examine accounts of the significance of place of residence to quality of life. The association between ethnic density and social capital varied depending on the level of measurement of social capital and differed across ethnic minority groups. Social capital was not found to mediate the association between ethnic density and health. Structural differences in the characteristics of the neighbourhoods where different ethnic groups reside are reflected in the accounts of their daily experiences, and we observed different narratives of neighbourhood experiences between Indian and Caribbean respondents. The use of mixed methods provides an important contribution to the study of ethnic minority people's experience of their neighbourhood, as this approach has allowed us to gain important insights that cannot be inferred from quantitative or qualitative data alone.
Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.
2017-01-01
Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
Parallel imaging of knee cartilage at 3 Tesla.
Zuo, Jin; Li, Xiaojuan; Banerjee, Suchandrima; Han, Eric; Majumdar, Sharmila
2007-10-01
To evaluate the feasibility and reproducibility of quantitative cartilage imaging with parallel imaging at 3T and to determine the impact of the acceleration factor (AF) on morphological and relaxation measurements. An eight-channel phased-array knee coil was employed for conventional and parallel imaging on a 3T scanner. The imaging protocol consisted of a T2-weighted fast spin echo (FSE), a 3D-spoiled gradient echo (SPGR), a custom 3D-SPGR T1rho, and a 3D-SPGR T2 sequence. Parallel imaging was performed with an array spatial sensitivity technique (ASSET). The left knees of six healthy volunteers were scanned with both conventional and parallel imaging (AF = 2). Morphological parameters and relaxation maps from parallel imaging (AF = 2) showed results comparable with the conventional method. The intraclass correlation coefficients (ICC) of the two methods for cartilage volume, mean cartilage thickness, T1rho, and T2 were 0.999, 0.977, 0.964, and 0.969, respectively, while demonstrating excellent reproducibility. No significant measurement differences were found when AF reached 3, despite the low signal-to-noise ratio (SNR). The study demonstrated that parallel imaging can be applied to knee cartilage quantification at AF = 2 with good reproducibility and without degrading measurement accuracy, while effectively reducing scan time. Shorter imaging times can be achieved with higher AF at the cost of SNR. (c) 2007 Wiley-Liss, Inc.
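For context, ASSET belongs to the SENSE family of reconstructions; the standard unfolding step (a general formulation, not taken from this study) solves, for each set of R aliased pixel locations, a small least-squares problem using the measured coil sensitivities:

```latex
% Standard SENSE-type unfolding: at acceleration factor R, the vector y of
% C coil values at one aliased pixel mixes R true pixel values x through the
% C x R sensitivity matrix S; with coil noise covariance Psi,
\hat{\mathbf{x}}
  = \left(\mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\mathbf{S}\right)^{-1}
    \mathbf{S}^{H}\boldsymbol{\Psi}^{-1}\,\mathbf{y}.
```

The conditioning of the matrix being inverted worsens as R grows (the g-factor penalty), which is consistent with the low-SNR behaviour the study reports at AF = 3.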
Mixed Methods in Emerging Academic Subdisciplines: The Case of Sport Management
ERIC Educational Resources Information Center
van der Roest, Jan-Willem; Spaaij, Ramón; van Bottenburg, Maarten
2015-01-01
This article examines the prevalence and characteristics of mixed methods research in the relatively new subdiscipline of sport management. A mixed methods study is undertaken to evaluate the epistemological/philosophical, methodological, and technical levels of mixed methods design in sport management research. The results indicate that mixed…
Qualitative Approaches to Mixed Methods Practice
ERIC Educational Resources Information Center
Hesse-Biber, Sharlene
2010-01-01
This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced…
Educational Accountability: A Qualitatively Driven Mixed-Methods Approach
ERIC Educational Resources Information Center
Hall, Jori N.; Ryan, Katherine E.
2011-01-01
This article discusses the importance of mixed-methods research, in particular the value of qualitatively driven mixed-methods research for quantitatively driven domains like educational accountability. The article demonstrates the merits of qualitative thinking by describing a mixed-methods study that focuses on a middle school's system of…
An M-step preconditioned conjugate gradient method for parallel computation
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
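The abstract describes an m-step preconditioner, in which a few sweeps of a simple stationary iteration stand in for the preconditioner solve. The sketch below is a generic reconstruction of that idea, assuming a Jacobi inner iteration (the paper also covers other splittings); each inner sweep is a matrix-vector product plus a diagonal scaling, which is what makes the preconditioner attractive on vector and parallel hardware.

```python
# Generic m-step preconditioned CG sketch (an illustration of the technique,
# not the paper's code): the preconditioner application z = M^{-1} r is
# approximated by m Jacobi sweeps on A z = r, each of which vectorizes well.
import numpy as np

def m_step_jacobi(A, r, m):
    """Apply m Jacobi iterations to A z = r, starting from z = 0."""
    d = np.diag(A)
    z = r / d
    for _ in range(m - 1):
        z = z + (r - A @ z) / d
    return z

def pcg(A, b, m=3, tol=1e-10, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_step_jacobi(A, r, m)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_step_jacobi(A, r, m)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test problem: 1D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg(A, np.ones(n))
print(np.linalg.norm(A @ x - np.ones(n)))
```

For m = 3 the induced preconditioner is symmetric positive definite whenever A is, so the CG theory applies; larger m trades more (parallel-friendly) matrix-vector work per iteration for fewer outer iterations.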
ERIC Educational Resources Information Center
Uzunöz, Abdulkadir
2018-01-01
The purpose of this study is to identify the conceptual mistakes frequently encountered in teaching geography such as latitude-parallel concepts, and to prepare conceptual change text based on the Scientific Storyline Method, in order to resolve the identified misconceptions. In this study, the special case method, which is one of the qualitative…
USDA-ARS's Scientific Manuscript database
A method is demonstrated for analysis of vitamin D-fortified dietary supplements that eliminates virtually all chemical pretreatment prior to analysis, and is referred to as a ‘dilute and shoot’ method. Three mass spectrometers, in parallel, plus a UV detector, an evaporative light scattering detec...
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during atmospheric entry of a planet or a moon as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on a cache-coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented in both perfect and real gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way to parallelize is to schedule processors like a pipeline in software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
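The hyperplane trick is easy to state concretely. In the sketch below (an illustration of the general technique, not the RGAS or INS3D-LU code), a Gauss-Seidel sweep for the discrete 2D Poisson equation visits anti-diagonals i + j = const in order; no two unknowns on the same hyperplane couple with each other, so each hyperplane can be updated in parallel, here with NumPy vectorization standing in for OpenMP threads.

```python
# Hyperplane-ordered Gauss-Seidel sweep for the 5-point discrete Poisson
# equation: points on the anti-diagonal i + j = s depend only on other
# hyperplanes, so the whole diagonal updates at once.
import numpy as np

def gs_hyperplane_sweep(u, f, h):
    n = u.shape[0]                               # grid is n x n, interior 1..n-2
    for s in range(2, 2 * (n - 2) + 1):          # hyperplane index s = i + j
        i = np.arange(max(1, s - (n - 2)), min(n - 2, s - 1) + 1)
        j = s - i                                # all interior points with i+j=s
        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                          + u[i, j - 1] + u[i, j + 1] - h**2 * f[i, j])
    return u

n = 64
u, f = np.zeros((n, n)), np.ones((n, n))
for _ in range(200):                             # ordinary GS outer iteration
    gs_hyperplane_sweep(u, f, 1.0 / (n - 1))
```

The pipeline alternative mentioned in the abstract instead assigns each processor a band of rows and staggers their start times so that data dependencies flow downstream, trading the short hyperplanes near the domain corners for a startup/drain cost.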
Using mixed methods to develop and evaluate complex interventions in palliative care research.
Farquhar, Morag C; Ewing, Gail; Booth, Sara
2011-12-01
There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research where the majority of interventions are complex, and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design and data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination, and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Li, Dangdang; Zhang, Shasha; Song, Zehua; Wang, Guotong; Li, Shengkun
2017-08-18
The bioactivity-guided mixed synthesis was conceived, in which the designed mix-reactions were run in parallel for simultaneous construction of different kinds of analogs. The valuable ones were singled out by biological screening. This tactic will facilitate more rapid incorporation of bioactive candidates into pesticide chemists' repertoire, exemplified here by the optimization of less explored homodrimanes as antifungal ingredients. The discovery of D9 as a potent fungicidal agent can be completed in <2 weeks by one student, with EC50 values of 3.33 mg/L and 2.45 mg/L against S. sclerotiorum and B. cinerea, respectively. To confirm the practicability, time-efficiency, and reliability, specific homodrimanes (82 derivatives) were synthesized and structurally elucidated separately, and their EC50 values determined. The SAR correlated well with the intentionally mixed synthesis, and the potential was further confirmed by the in vivo bioassay. This methodology will foster more efficient exploration of the biologically relevant chemical space of natural products in pesticide discovery, and can also be tailored readily for lead optimization in medicinal chemistry. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Improved silicon carbide for advanced heat engines
NASA Technical Reports Server (NTRS)
Whalen, Thomas J.; Mangels, J. A.
1986-01-01
The development of silicon carbide materials of high strength was initiated and components of complex shape and high reliability were formed. The approach was to adapt a beta-SiC powder and binder system to the injection molding process and to develop procedures and process parameters capable of providing a sintered silicon carbide material with improved properties. The initial effort was to characterize the baseline precursor materials, develop mixing and injection molding procedures for fabricating test bars, and characterize the properties of the sintered materials. Parallel studies of various mixing, dewaxing, and sintering procedures were performed in order to distinguish process routes for improving material properties. A total of 276 modulus-of-rupture (MOR) bars of the baseline material was molded, and 122 bars were fully processed to a sinter density of approximately 95 percent. Fluid mixing techniques were developed which significantly reduced flaw size and improved the strength of the material. Initial MOR tests indicated that strength of the fluid-mixed material exceeds the baseline property by more than 33 percent.
Hidden axion dark matter decaying through mixing with QCD axion and the 3.5 keV X-ray line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higaki, Tetsutaro; Kitajima, Naoya; Takahashi, Fuminobu, E-mail: thigaki@post.kek.jp, E-mail: kitajima@tuhep.phys.tohoku.ac.jp, E-mail: fumi@tuhep.phys.tohoku.ac.jp
2014-12-01
Hidden axions may be coupled to the standard model particles through a kinetic or mass mixing with the QCD axion. We study a scenario in which a hidden axion constitutes a part of or the whole of dark matter and decays into photons through the mixing, explaining the 3.5 keV X-ray line signal. Interestingly, the required long lifetime of the hidden axion dark matter can be realized for a QCD axion decay constant at an intermediate scale, if the mixing is sufficiently small. In such a two-component dark matter scenario, the primordial density perturbations of the hidden axion can be highly non-Gaussian, leading to a possible dispersion in the X-ray line strength from various galaxy clusters and nearby galaxies. We also discuss how the parallel and orthogonal alignment of two axions affects their couplings to gauge fields. In particular, the QCD axion decay constant can be much larger than the actual Peccei-Quinn symmetry breaking scale.
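Schematically, the mass-mixing mechanism works as follows (an illustrative parametrization with standard formulas, not the paper's exact Lagrangian): a small mixing angle θ transmits the QCD axion's photon coupling to the hidden axion a′, whose two-photon decay then produces a line at half its mass.

```latex
\begin{align}
  \mathcal{L} &\supset
   -\tfrac12
   \begin{pmatrix} a & a' \end{pmatrix}
   \begin{pmatrix} m_a^2 & \delta m^2 \\ \delta m^2 & m_{a'}^2 \end{pmatrix}
   \begin{pmatrix} a \\ a' \end{pmatrix}
   + \frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde F^{\mu\nu},
   \qquad
   \theta \simeq \frac{\delta m^2}{m_{a'}^2 - m_a^2}, \\[2pt]
  \Gamma(a' \to \gamma\gamma) &\simeq
   \frac{\left(\theta\, g_{a\gamma\gamma}\right)^{2} m_{a'}^{3}}{64\pi},
   \qquad
   E_\gamma = \frac{m_{a'}}{2} \approx 3.5\ \mathrm{keV}
   \;\Rightarrow\; m_{a'} \approx 7\ \mathrm{keV}.
\end{align}
```

The long lifetime the abstract requires then follows from a sufficiently small effective coupling θ g_aγγ, which ties the X-ray line strength to the size of the mixing.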
Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P
2015-09-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.