The Threat of Unexamined Secondary Data: A Critical Race Transformative Convergent Mixed Methods
ERIC Educational Resources Information Center
Garcia, Nichole M.; Mayorga, Oscar J.
2018-01-01
This article uses a critical race theory framework to conceptualize a Critical Race Transformative Convergent Mixed Methods (CRTCMM) in education. CRTCMM is a methodology that challenges normative educational research practices by acknowledging that racism permeates educational institutions and marginalizes Communities of Color. The focus of this…
Being Outside Learning about Science Is Amazing: A Mixed Methods Study
ERIC Educational Resources Information Center
Weibel, Michelle L.
2011-01-01
This study used a convergent parallel mixed methods design to examine teachers' environmental attitudes and concerns about an outdoor educational field trip. Converging both quantitative data (Environmental Attitudes Scale and teacher demographics) and qualitative data (Open-Ended Statements of Concern and interviews) facilitated interpretation.…
ERIC Educational Resources Information Center
Kerrigan, Monica Reid
2014-01-01
This convergent parallel design mixed methods case study of four community colleges explores the relationship between organizational capacity and implementation of data-driven decision making (DDDM). The article also illustrates purposive sampling using replication logic for cross-case analysis and the strengths and weaknesses of quantitizing…
ERIC Educational Resources Information Center
Rosner, Terre Layng
2017-01-01
This study is a mixed-methods, neopragmatist examination of the systems currently being practiced in creative professional companies and the consequential changes in Higher Education Media Arts curricula, supporting a kind of meta-disciplinary pedagogy emerging from the pressures of content and device convergence in industry. The research…
Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study
ERIC Educational Resources Information Center
Ogilvie, Emily; McCrudden, Matthew T.
2017-01-01
An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents viewed (a) the ESDM goals as appropriate for their children, (b) the intervention procedures as acceptable and appropriate, and (c) the changes in their children's behavior as practically significant. Parents of four children who participated in the ESDM completed the TARF-R questionnaire and took part in a semi-structured interview. Both data sets indicated that parents rated their experiences with the ESDM positively and judged it socially valid. The findings indicated that what was implemented in the intervention is complemented by how it was implemented and by whom.
A survey of quantum Lyapunov control methods.
Cong, Shuang; Meng, Fangfang
2013-01-01
A quantum Lyapunov-based control method is usable in a closed quantum system only if it makes the system convergent, not merely stable. In the convergence study of quantum Lyapunov control, two situations are distinguished: nondegenerate cases and degenerate cases. For each situation, the target state is divided into four categories: the eigenstate, the mixed state that commutes with the internal Hamiltonian, the superposition state, and the mixed state that does not commute with the internal Hamiltonian. For these four categories, the quantum Lyapunov control methods for closed quantum systems are summarized and analyzed. In particular, the convergence of the control system to the different target states is reviewed, and ways to satisfy the convergence conditions are summarized and analyzed.
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of the linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed by comparing the convergence of the approximation to a smooth solution with the best approximation available in the finite element space used. Consideration is also devoted to techniques for the efficient use of a Gaussian elimination algorithm to solve the system of linear algebraic equations arising from finite element discretizations of linear partial differential equations.
A Mixed Methods Portrait of Urban Instrumental Music Teaching
ERIC Educational Resources Information Center
Fitzpatrick, Kate R.
2011-01-01
The purpose of this mixed methods study was to learn about the ways that instrumental music teachers in Chicago navigated the urban landscape. The design of the study most closely resembles Creswell and Plano Clark's (2007) two-part Triangulation Convergence Mixed Methods Design, with the addition of an initial exploratory focus group component.…
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
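The penetration law h = αAgt² referenced in this abstract is straightforward to evaluate numerically. The sketch below is purely illustrative; the growth coefficient α = 0.06 is a commonly quoted ballpark figure, not a value taken from this study.

```python
def bubble_height(alpha, atwood, g, t):
    """Self-similar Rayleigh-Taylor penetration law h = alpha * A * g * t**2.

    alpha  -- dimensionless growth coefficient (hypothetical 0.06 below)
    atwood -- Atwood number A = (rho_h - rho_l) / (rho_h + rho_l)
    g      -- acceleration
    t      -- time
    """
    return alpha * atwood * g * t ** 2

# Example with hypothetical parameters in SI units
h = bubble_height(alpha=0.06, atwood=0.5, g=9.81, t=2.0)
print(h)  # roughly 1.18 m
```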
ERIC Educational Resources Information Center
Fitzpatrick, Kate R.
2016-01-01
Although the mixing of quantitative and qualitative data is an essential component of mixed methods research, the process of integrating both types of data in meaningful ways can be challenging. The purpose of this article is to describe the use of data labels in mixed methods research as a technique for the integration of qualitative and…
A Mixed-Methods Longitudinal Evaluation of a One-Day Mental Health Wellness Intervention
ERIC Educational Resources Information Center
Doyle, Louise; de Vries, Jan; Higgins, Agnes; Keogh, Brian; McBennett, Padraig; O'Shea, Marié T.
2017-01-01
Objectives: This study evaluated the impact of a one-day mental health Wellness Workshop on participants' mental health and attitudes towards mental health. Design: Convergent, longitudinal mixed-methods approach. Setting: The study evaluated Wellness Workshops which took place throughout the Republic of Ireland. Method: Questionnaires measuring…
NASA Astrophysics Data System (ADS)
Dudar, O. I.; Dudar, E. S.
2017-11-01
The features of applying the one-dimensional finite element method (FEM) in combination with the laminar solutions method (LSM) to the calculation of underground ventilation networks are considered. In this case the processes of heat and mass transfer change the properties of the fluid (a binary vapour-air mix). Under the action of gravitational forces this leads to such phenomena as natural draft, local circulation, etc. The FEM relations accounting for the action of gravity, the mass conservation law, and the dependence of the vapour-air mix properties on the thermodynamic parameters are derived so that these phenomena can be modelled. The analogy between the elastic and plastic rod deformation processes and the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type implies the guaranteed convergence of the LSM for any regime of turbulent flow in a rough pipe. The convergence rate of the FEM-LSM is investigated by means of numerical experiments and proves to be much higher than that of the Cross-Andriyashev method. Data from other authors comparing the convergence rates of the finite element method, the Newton method, and the gradient method are provided. These data lead to the conclusion that the FEM in combination with the LSM is one of the most effective methods for calculating hydraulic and ventilation networks. The FEM-LSM has been used to create the research application package “MineClimate”, which calculates microclimate parameters in underground ventilation networks.
Material nonlinear analysis via mixed-iterative finite element method
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1992-01-01
The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.
Triangulation and Mixed Methods Designs: Data Integration with New Research Technologies
ERIC Educational Resources Information Center
Fielding, Nigel G.
2012-01-01
Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…
Mixed methods research in mental health nursing.
Kettles, A M; Creswell, J W; Zhang, W
2011-08-01
Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory, in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is considered, with some possible solutions proposed. © 2011 Blackwell Publishing.
Veteran Teacher Engagement in Site-Based Professional Development: A Mixed Methods Study
ERIC Educational Resources Information Center
Houston, Biaze L.
2016-01-01
This research study examined how teachers self-report their levels of engagement, which factors they believe contribute most to their engagement, and which assumptions of andragogy most heavily influence teacher engagement in site-based professional development. This study employed a convergent parallel mixed methods design to study veteran…
Technology-Enhanced Multimedia Instruction in Foreign Language Classrooms: A Mixed Methods Study
ERIC Educational Resources Information Center
Ketsman, Olha
2012-01-01
Technology-enhanced multimedia instruction in grades 6 through 12 foreign language classrooms was the focus of this study. The study's findings fill a gap in the literature through the report of how technology-enhanced multimedia instruction was successfully implemented in foreign language classrooms. Convergent parallel mixed methods study…
Iterative combining rules for the van der Waals potentials of mixed rare gas systems
NASA Astrophysics Data System (ADS)
Wei, L. M.; Li, P.; Tang, K. T.
2017-05-01
An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one iteration, or even none, is necessary for the results to converge. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.
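The abstract describes iterating combining-rule estimates until they become self-consistent. The skeleton below shows only the generic fixed-point structure of such a procedure; the update function is a toy placeholder, not the actual Tang-Toennies relations for Re and De.

```python
def iterate_to_convergence(update, x0, tol=1e-10, max_iter=200):
    """Repeatedly apply `update` until successive estimates agree to `tol`."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy update with fixed point x = 2.0; the error halves every step,
# mimicking the rapid convergence reported for similar-size atom pairs
result = iterate_to_convergence(lambda x: 0.5 * (x + 2.0), x0=10.0)
print(result)  # ~2.0
```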
Efficient mixing scheme for self-consistent all-electron charge density
NASA Astrophysics Data System (ADS)
Shishidou, Tatsuya; Weinert, Michael
2015-03-01
In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the "input" and "output" densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an "optimal" input density ρopt and its "residual" Ropt as a linear combination of the densities of the iteration sequence. In large-scale metallic systems, however, the long-range nature of the Coulomb interaction often causes the "charge sloshing" phenomenon and significantly impacts the convergence. Two treatments, formulated in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for the Pulay optimization and (ii) Kerker-type preconditioning in mixing Ropt. In all-electron methods, where the charge density does not have a convergent Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of the Hartree potential, one can accomplish procedures (i) and (ii) without entering reciprocal space. Test calculations are done with a FLAPW method.
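As a point of reference, a bare-bones Pulay (DIIS) mixing step can be sketched as follows. This is the generic textbook scheme with a plain linear step in place of the Kerker-type treatments the paper develops; the function name and damping value are illustrative.

```python
import numpy as np

def pulay_mix(rho_in_hist, residual_hist, alpha=0.3):
    """One generic Pulay (DIIS) mixing step.

    rho_in_hist   -- list of input densities from previous iterations
    residual_hist -- matching residuals R_i = rho_out_i - rho_in_i
    alpha         -- linear damping applied to the optimal residual
    """
    n = len(residual_hist)
    # Residual overlap matrix, bordered with the sum-to-one constraint
    B = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            B[i, j] = np.dot(residual_hist[i], residual_hist[j])
    B[n, :n] = B[:n, n] = 1.0
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    c = np.linalg.solve(B, rhs)[:n]  # optimal mixing coefficients
    rho_opt = sum(ci * r for ci, r in zip(c, rho_in_hist))
    R_opt = sum(ci * r for ci, r in zip(c, residual_hist))
    # A Kerker-type preconditioner would filter R_opt here; we mix it plainly
    return rho_opt + alpha * R_opt
```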
ERIC Educational Resources Information Center
Murphy, Joel P.; Murphy, Shirley A.
2016-01-01
A convergent mixed methods research design addressed the extent of benefit obtained from reading culturally inclusive prompts (i.e., four brief essays written by Latino authors) to improve essay writing in a developmental (pre-college) English course. Participants were 45 Latino students who provided quantitative data. Chi square analysis showed…
A Neighborhood Notion of Emergent Literacy: One Mixed Methods Inquiry to Inform Community Learning
ERIC Educational Resources Information Center
Hoffman, Emily Brown; Whittingham, Colleen E.
2017-01-01
Using a convergent parallel mixed methods design, this study considered the early literacy and language environments actualized by childcare providers and parents of young children (ages 3-5) living in one large urban community in the United States of America. Both childcare providers and parents responded to questionnaires and participated in…
ERIC Educational Resources Information Center
Garcia, Gina A.; Huerta, Adrian H.; Ramirez, Jenesis J.; Patrón, Oscar E.
2017-01-01
As the number of Latino males entering college increases, there is a need to understand their unique leadership experiences. This study used a convergent parallel mixed methods design to understand what contexts contribute to Latino male undergraduate students' leadership development, capacity, and experiences. Quantitative data were gathered by…
Mixed Methods Case Study of Generational Patterns in Responses to Shame and Guilt
ERIC Educational Resources Information Center
Ng, Tony
2013-01-01
Moral socialization and moral learning are antecedents of moral motivation. As many as 4 generations interact in workplace and education settings; hence, a deeper understanding of the moral motivation of members of those generations is needed. The purpose of this convergent mixed methods case study was to understand the moral motivation of 5…
The arbitrary order mixed mimetic finite difference method for the diffusion equation
Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco
2016-05-01
Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variables are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with a variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.
Instructional Coaching in a Small District: A Mixed Methods Study of Teachers' Concerns
ERIC Educational Resources Information Center
Mayfield, Melissa J.
2016-01-01
This study utilized a convergent parallel mixed methods design to study teachers' concerns during implementation of instructional coaching for math in a rural PK-12 district in north Texas over a three-year time period. Five campuses were included in the study: one high school (grades 9-12), one middle school (grades 6-8), and three elementary…
ERIC Educational Resources Information Center
Costa, Ann Marie
2012-01-01
A recent law in a New England state allowed public schools to operate with increased flexibility and autonomy through the authorization of the creation of Innovation Schools. This project study, a program evaluation using a convergent parallel mixed methods research design, allowed for a comprehensive evaluation of the first Innovation School…
ERIC Educational Resources Information Center
Youngs, Howard; Piggot-Irvine, Eileen
2012-01-01
Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled time-asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, the static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
ERIC Educational Resources Information Center
Senra, Hugo
2013-01-01
The current pilot study aims to explore whether different adults' experiences of lower-limb amputation could be associated with different levels of depression. To achieve these study objectives, a convergent parallel mixed methods design was used in a convenience sample of 42 adult amputees (mean age of 61 years; SD = 13.5). All of them had…
ERIC Educational Resources Information Center
Sánchez-Gómez, Ma. Cruz; Pinto-Llorente, Ana Ma.; García-Peñalvo, Francisco José
2017-01-01
In the field of teaching a second language (L2), technology has always occupied a relevant position. The development of new technological tools has allowed the convergence of two learning environments, traditional face-to-face learning and virtual learning. This convergence has fostered the advantages of both types of instructions and the…
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
Newcomer Immigrant Adolescents: A Mixed-Methods Examination of Family Stressors and School Outcomes
ERIC Educational Resources Information Center
Patel, Sita G.; Clarke, Annette V.; Eltareb, Fazia; Macciomei, Erynn E.; Wickham, Robert E.
2016-01-01
Family stressors predict negative psychological outcomes for immigrant adolescents, yet little is known about how such stressors interact to predict school outcomes. The purpose of this study was to explore the interactive role of family stressors on school outcomes for newcomer adolescent immigrants. Using a convergent parallel mixed-methods…
ERIC Educational Resources Information Center
Isyar, Özge Özgür; Akay, Cenk
2017-01-01
The purpose of this research is to determine classroom teachers' sense of efficacy about drama in education, to examine it in terms of various variables, and to reveal their opinions and metaphorical perceptions regarding the concept of drama in education. Convergent parallel design, one of the mixed methods designs, was used in the…
ERIC Educational Resources Information Center
Shugart, Kelli Palmer
2017-01-01
Because of the limited research on nursing faculty's perceptions of horizontal violence, this convergent mixed methods study investigated the phenomenon of bullying behaviors among nursing faculty and the faculty's intent to stay in academe following exposure to bullying. 300 nursing faculty members of the Nursing Educator Discussion list…
Hosono, Nobuhiko; Gochomori, Mika; Matsuda, Ryotaro; Sato, Hiroshi; Kitagawa, Susumu
2016-05-25
We herein report the divergent and convergent synthesis of coordination star polymers (CSP) by using metal-organic polyhedrons (MOPs) as a multifunctional core. For the divergent route, copper-based great rhombicuboctahedral MOPs decorated with dithiobenzoate or trithioester chain transfer groups at the periphery were designed. Subsequent reversible addition-fragmentation chain transfer (RAFT) polymerization of monomers mediated by the MOPs gave star polymers, in which 24 polymeric arms were grafted from the MOP core. On the other hand, the convergent route provided identical CSP architectures by simple mixing of a macroligand and copper ions. Isophthalic acid-terminated polymers (so-called macroligands) immediately formed the corresponding CSPs through a coordination reaction with copper(II) ions. This convergent route enabled us to obtain miktoarm CSPs with tunable chain compositions through ligand mixing alone. This powerful method allows instant access to a wide variety of multicomponent star polymers that conventionally have required highly skilled and multistep syntheses. MOP-core CSPs are a new class of star polymer that can offer a design strategy for highly processable porous soft materials by using coordination nanocages as a building component.
NASA Astrophysics Data System (ADS)
Khosravi Parsa, Mohsen; Hormozi, Faramarz
2014-06-01
In the present work, a passive micromixer with sinusoidal side walls, a convergent-divergent cross section and a T-shaped entrance was fabricated and modeled. The main aim of this modeling was to study the Dean and separation vortices created inside sinusoidal microchannels with a convergent-divergent cross section. The microchannels were fabricated by CO2 laser micromachining, and the fluid mixing pattern was observed using a digital microscope imaging system. Computational fluid dynamics with the finite element method was also applied to solve the Navier-Stokes equations and the diffusion-convection equation at inlet Reynolds numbers of 0.2-75. The numerically obtained results were in reasonable agreement with the experimental data. According to previous studies, the phase shift and wavelength of the side walls are important parameters in designing sinusoidal microchannels. An increase of the phase shift between the side walls makes the cross section convergent-divergent. The results also show that at inlet Reynolds numbers below 20 molecular diffusion is the dominant mixing factor and the mixing index is nearly identical in all designs. For higher inlet Reynolds numbers (>20), secondary flow is the main factor in mixing. Notably, the mixing index depends strongly on the phase shift (ϕ) and the wavelength of the side walls (λ), such that the best mixing is observed at ϕ = 3π/4 and at a wavelength-to-amplitude ratio of 3.3. Likewise, the maximum pressure drop is reported at ϕ = π. Therefore, the sinusoidal microchannel with phase shifts between π/2 and 3π/4 is the best choice for biological and chemical analyses, with a reported mixing index above 90% and a pressure drop below 12 kPa.
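For readers unfamiliar with the quantity, a mixing index of the kind reported above is commonly defined from the standard deviation of a concentration field, M = 1 - σ/σ_max, so that 0% is fully segregated and 100% is perfectly mixed. The snippet below uses this common definition; it is not necessarily the exact formula used in the paper.

```python
import numpy as np

def mixing_index(concentrations, c_mean=0.5):
    """Common sigma-based mixing index: 1 = perfectly mixed, 0 = segregated."""
    sigma = np.sqrt(np.mean((concentrations - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))  # fully segregated limit
    return 1.0 - sigma / sigma_max

print(mixing_index(np.array([0.0, 1.0, 0.0, 1.0])))  # 0.0 (segregated)
print(mixing_index(np.array([0.5, 0.5, 0.5, 0.5])))  # 1.0 (uniform)
```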
Study of effects of injector geometry on fuel-air mixing and combustion
NASA Technical Reports Server (NTRS)
Bangert, L. H.; Roach, R. L.
1977-01-01
An implicit finite-difference method has been developed for computing the flow in the near field of a fuel injector as part of a broader study of the effects of fuel injector geometry on fuel-air mixing and combustion. Detailed numerical results have been obtained for cases of laminar and turbulent flow without base injection, corresponding to the supersonic base flow problem. These numerical results indicated that the method is stable and convergent, and that significant savings in computer time can be achieved, compared with explicit methods.
Mixing noise reduction for rectangular supersonic jets by nozzle shaping and induced screech mixing
NASA Technical Reports Server (NTRS)
Rice, Edward J.; Raman, Ganesh
1993-01-01
Two methods of mixing noise modification were studied for supersonic jets flowing from rectangular nozzles with an aspect ratio of about five and a small dimension of about 1.4 cm. The first involves nozzle geometry variation using either single (unsymmetrical) or double (symmetrical) thirty-degree bevelled cutbacks of the nozzle exit. Both converging (C) and converging-diverging (C-D) versions were tested. The double-bevelled C-D nozzle produced a jet mixing noise reduction of about 4 dB compared with a standard rectangular C-D nozzle. In addition, all bevelled nozzles produced an upstream shift in the peak mixing noise, which is conducive to improved attenuation when the nozzle is used in an acoustically treated duct. A large increase in high-frequency noise also occurred near the plane of the nozzle exit. Because of its near-normal incidence, this noise can be easily attenuated with wall treatment. The second approach uses paddles inserted at the edges of two sides of the jet to induce screech and greatly enhance the jet mixing. Although screech and mixing noise levels are increased, the enhanced mixing moves the source locations upstream and may make an enclosed system more amenable to noise reduction using wall acoustic treatment.
Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
1994-01-01
Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver, PARC. The nozzles examined have dominant flow characteristics similar to those considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model (Thomas) and with the two-equation Chien k-epsilon turbulence model. The Thomas model was run with the mixing coefficient set both at its default value of 0.09 and at a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were overall in better agreement with the experimental data than were those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.
Computation of steady nozzle flow by a time-dependent method
NASA Technical Reports Server (NTRS)
Cline, M. C.
1974-01-01
The equations of motion governing steady, inviscid flow are of mixed type: hyperbolic in the supersonic region and elliptic in the subsonic region. This mixed character complicates numerical solution, but the difficulty may be removed by the so-called time-dependent method, in which the governing equations become hyperbolic everywhere and the steady-state solution is obtained as the asymptotic solution for large time. The object of this research was to develop a production-type computer program capable of solving converging, converging-diverging, and plug two-dimensional nozzle flows in computational times of 1 min or less on a CDC 6600 computer.
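The time-dependent idea is easy to sketch on a simpler model problem (1D diffusion rather than nozzle flow, so this is an illustration of the method, not the paper's solver): march the unsteady equation in pseudo-time until the solution stops changing, and the asymptotic state is the steady solution.

```python
import numpy as np

# Pseudo-time marching: the steady solution of u_t = u_xx with u(0)=0,
# u(1)=1 is recovered as the asymptotic state for large time.
n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.4 * dx**2                 # explicit stability limit is dt <= 0.5*dx^2
u = np.zeros(n)
u[-1] = 1.0                      # boundary values held fixed

for step in range(200000):
    du = dt * (u[:-2] - 2*u[1:-1] + u[2:]) / dx**2
    u[1:-1] += du
    if np.max(np.abs(du)) < 1e-12:   # solution has stopped changing
        break

print(np.max(np.abs(u - x)))     # distance from the exact steady profile u(x) = x
```

The marching recovers the exact steady profile to round-off-level accuracy; the same principle underlies marching mixed-type nozzle equations to steady state.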
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed methods designs (exploratory sequential, explanatory sequential, and convergent) and through four advanced frameworks (multistage, intervention, case study, and participatory). Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
Design considerations for divers' breathing gas systems
NASA Technical Reports Server (NTRS)
Hansen, O. R.
1972-01-01
Some of the design methods used to establish the gas storage, mixing, and transfer requirements for existing deep dive systems are discussed. Gas mixing systems appear essential for providing low-oxygen-concentration mixtures within the narrowing tolerance range dictated by applications at increasing depths. Time-related use of gas, together with the performance of the gas transfer system, ensures a reasonable time frame for systems application.
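The need for low oxygen concentrations at depth follows from partial-pressure arithmetic (standard diving physics, not taken from the paper): holding a target O2 partial pressure fixed while ambient pressure grows with depth forces the O2 fraction down.

```python
def o2_fraction(depth_m: float, target_ppo2_bar: float = 0.4) -> float:
    """O2 fraction needed to hold a target O2 partial pressure at depth.

    Uses the seawater approximation that absolute pressure rises by
    about 1 bar per 10 m, so P_ambient = 1 + depth/10 (bar).
    """
    p_ambient = 1.0 + depth_m / 10.0
    return target_ppo2_bar / p_ambient

# At 90 m the ambient pressure is 10 bar, so only a 4% O2 mixture
# keeps the O2 partial pressure at 0.4 bar.
print(o2_fraction(90.0))
```

Small absolute errors in the mixed fraction translate into large partial-pressure errors at depth, which is why the tolerance range narrows as depth increases.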
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
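As a sketch of what Gauss-Hermite estimation integrates, the snippet below evaluates the marginal log-likelihood of a single cluster in a random-intercept logistic model. The data and fixed-effects predictor are made up for illustration; a full estimator would maximize this quantity summed over clusters, which is what the packages compared in the article do internally.

```python
import numpy as np

def cluster_loglik(y, eta, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster of a random-intercept
    logistic model: the random effect b ~ N(0, sigma^2) is integrated
    out with an n_nodes-point Gauss-Hermite rule."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes        # change of variables for N(0, sigma^2)
    lin = eta[:, None] + b[None, :]         # (n_obs, n_nodes) linear predictor
    p = 1.0 / (1.0 + np.exp(-lin))          # conditional success probabilities
    cond = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return float(np.log(np.sum(weights * cond) / np.sqrt(np.pi)))

y = np.array([1, 0, 1, 1])                  # one cluster's binary responses
eta = np.array([0.2, -0.5, 1.0, 0.3])       # fixed-effects part, assumed known
ll = cluster_loglik(y, eta, sigma=1.0)
print(ll)
```

With one random effect this is a cheap one-dimensional quadrature; with multiple correlated random effects the rule becomes a tensor product over dimensions, which is the computational blow-up the article discusses.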
Pedersen, Maria; Overgaard, Dorthe; Andersen, Ingelise; Baastrup, Marie; Egerod, Ingrid
2018-05-17
To explore the extent to which the qualitative and quantitative data converge and explain mechanisms and drivers of social inequality in cardiac rehabilitation attendance. Social inequality in cardiac rehabilitation attendance has been a recognized problem for many years. However, to date the mechanisms driving these inequalities are still not fully understood. The study was designed as a convergent mixed methods study. From March 2015 - March 2016, patients hospitalized with acute coronary syndrome at two Danish regional hospitals were included in a quantitative prospective observational study (N=302). Qualitative interview informants (N=24) were sampled from the quantitative study population and half brought a close relative (N=12) for dyadic interviews. Interviews were conducted from August 2015 to February 2016. Integrated analyses were conducted in joint displays by merging the quantitative and qualitative findings. Qualitative and quantitative findings primarily confirmed and expanded each other; however, discordant results were also evident. Integrated analyses identified socially differentiated lifestyles, health beliefs, travel barriers and self-efficacy as potential drivers of social inequality in cardiac rehabilitation. Our study adds empirical evidence regarding how a mixed methods study can be used to obtain an understanding of complex healthcare problems. The study provides new knowledge concerning the mechanisms driving social inequality in cardiac rehabilitation attendance. To prevent social inequality, cardiac rehabilitation should be accommodated to patients with a history of unhealthy behaviour and low self-efficacy. Additionally, the rehabilitation programme should be offered in locations not requiring a long commute. This article is protected by copyright. All rights reserved.
School Principals' Opinions on In-Class Inspections
ERIC Educational Resources Information Center
Kayikci, Kemal; Sahin, Ahmet; Canturk, Gokhan
2016-01-01
The aim of this research is to determine school principals' opinions on the in-class inspections carried out by inspectors of the Ministry of National Education of Turkey (MEB). The study was modeled as a convergent parallel design, one of the mixed methods which combined qualitative and quantitative methods. For data collection, the researchers…
Kim, Yoonsang; Emery, Sherry
2013-01-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three dimensional mixed-element unstructured meshes, comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous solutions test cases and the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.
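As a sketch of the WENO mechanics on the simplest (1D, fifth-order) case, rather than the paper's 3D mixed-element schemes, the classic Jiang-Shu reconstruction blends three third-order candidate stencils with smoothness-weighted coefficients:

```python
import numpy as np

def weno5_left(v):
    """Fifth-order WENO reconstruction of the left-biased interface value
    v_{i+1/2} from the five cell averages v[i-2..i+2] (classic Jiang-Shu
    smoothness indicators and ideal weights 1/10, 6/10, 3/10)."""
    eps = 1e-6
    # smoothness indicators of the three candidate stencils
    b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 1/4*(v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 1/4*(v[1] - v[3])**2
    b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 1/4*(3*v[2] - 4*v[3] + v[4])**2
    # third-order candidate reconstructions
    p0 = ( 2*v[0] - 7*v[1] + 11*v[2]) / 6
    p1 = (  -v[1] + 5*v[2] +  2*v[3]) / 6
    p2 = ( 2*v[2] + 5*v[3] -    v[4]) / 6
    # nonlinear weights: near-ideal on smooth data, vanishing on a stencil
    # that crosses a discontinuity
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# On smooth (here linear) data every candidate already gives the exact
# interface value, so the blend does too: cells 0..4 yield 2.5.
print(weno5_left(np.array([0.0, 1.0, 2.0, 3.0, 4.0])))
```

The 3D mixed-element extension in the paper replaces these fixed 1D stencils with directional stencils assembled per element type, but the weighting logic is the same.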
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, with linear mixing performed on all other iterations. We then demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
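A minimal sketch of the periodic-Pulay idea on a toy linear fixed-point problem (this is not an electronic-structure code; the mixing parameter, period, and history length are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
M = rng.standard_normal((n, n))
M *= 0.9 / np.max(np.abs(np.linalg.eigvals(M)))   # contraction: spectral radius 0.9
b = rng.standard_normal(n)

def g(x):
    return b + M @ x                              # fixed-point map x = g(x)

x_star = np.linalg.solve(np.eye(n) - M, b)        # exact fixed point, for checking

def periodic_pulay(g, x0, beta=0.5, period=3, hist=5, tol=1e-10, itmax=2000):
    """Pulay (DIIS) extrapolation every `period` iterations, plain linear
    mixing x <- x + beta*r on all the others."""
    x = x0.copy()
    X, R = [], []
    for k in range(1, itmax + 1):
        r = g(x) - x                              # residual of the fixed-point map
        if np.linalg.norm(r) < tol:
            break
        X.append(x.copy()); R.append(r.copy())
        X, R = X[-hist:], R[-hist:]
        if k % period == 0 and len(R) > 1:
            # DIIS: coefficients c (summing to 1) minimizing |sum_i c_i r_i|
            m = len(R)
            B = np.array([[ri @ rj for rj in R] for ri in R])
            A = np.block([[B, np.ones((m, 1))], [np.ones((1, m)), np.zeros((1, 1))]])
            rhs = np.zeros(m + 1); rhs[-1] = 1.0
            c = np.linalg.lstsq(A, rhs, rcond=None)[0][:m]
            x = sum(ci * (xi + beta * ri) for ci, xi, ri in zip(c, X, R))
        else:
            x = x + beta * r                      # linear mixing step
    return x

x = periodic_pulay(g, np.zeros(n))
print(np.linalg.norm(x - x_star))
```

Performing extrapolation only periodically, with cheap damped steps in between, is the generalization the paper studies; in the SCF setting `g` would be the density-mixing map rather than an affine function.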
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The present treatment of elliptic regions via hyperbolic flux-splitting and high order methods proposes a flux splitting in which the corresponding Jacobians have real and positive/negative eigenvalues. While resembling the flux splitting used in hyperbolic systems, the present generalization of such splitting to elliptic regions allows the handling of mixed-type systems in a unified and heuristically stable fashion. The van der Waals fluid-dynamics equation is used. Convergence with good resolution to weak solutions for various Riemann problems is observed.
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-01-01
Objective. To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. Data Source/Study Setting. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Study Design. Convergent case study mixed methods design. Data Collection/Extraction Methods. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Principal Findings. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Conclusions. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. PMID:24279836
Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe
2006-08-17
Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.
ERIC Educational Resources Information Center
Fidan, Nuray Kurtdede; Ergün, Mustafa
2016-01-01
In this study, social, literary and technological sources used by classroom teachers in social studies courses are analyzed in terms of frequency. The study employs mixed methods research and is designed following the convergent parallel design. In the qualitative part of the study, phenomenological method was used and in the quantitative…
Improved methods of vibration analysis of pretwisted, airfoil blades
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.
1984-01-01
Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-12-01
To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Convergent case study mixed methods design. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. © Health Research and Educational Trust.
A Systematic Review of Mixed Methods Research on Human Factors and Ergonomics in Health Care
Carayon, Pascale; Kianfar, Sarah; Li, Yaqiong; Xie, Anping; Alyousef, Bashar; Wooldridge, Abigail
2016-01-01
This systematic literature review provides information on the use of mixed methods research in human factors and ergonomics (HFE) research in health care. Using the PRISMA methodology, we searched four databases (PubMed, PsycInfo, Web of Science, and Engineering Village) for studies that met the following inclusion criteria: (1) field study in health care, (2) mixing of qualitative and quantitative data, (3) HFE issues, and (4) empirical evidence. Using an iterative and collaborative process supported by a structured data collection form, the six authors identified a total of 58 studies that primarily address HFE issues in health information technology (e.g., usability) and in the work of healthcare workers. About two-thirds of the mixed methods studies used the convergent parallel study design where quantitative and qualitative data were collected simultaneously. A variety of methods were used for collecting data, including interview, survey and observation. The most frequent combination involved interview for qualitative data and survey for quantitative data. The use of mixed methods in healthcare HFE research has increased over time. However, increasing attention should be paid to the formal literature on mixed methods research to enhance the depth and breadth of this research. PMID:26154228
Stakeholders' Views of South Korea's Higher Education Internationalization Policy
ERIC Educational Resources Information Center
Cho, Young Ha; Palmer, John D.
2013-01-01
The study investigated the stakeholders' perceptions of South Korea's higher education internationalization policy. Based on the research framework that defines four policy values--propriety, effectiveness, diversity, and engagement, the convergence model was employed with a concurrent mixed method sampling strategy to analyze the stakeholders'…
The numerical modelling of mixing phenomena of nanofluids in passive micromixers
NASA Astrophysics Data System (ADS)
Milotin, R.; Lelea, D.
2018-01-01
The paper deals with rapid mixing phenomena in micro-mixing devices with four tangential injections and a converging tube, considering nanoparticle suspensions with water as the base fluid. Several parameters, such as the Reynolds number (Re = 6 - 284) and fluid temperature, are considered in order to optimize the process and obtain fundamental insight into the mixing phenomena. The governing partial differential equations are based on conservation of momentum and species. The commercial software package Ansys Fluent, based on a finite volume method, is used to solve the differential equations. The results reveal that the mixing index and mixing process are strongly dependent on both the Reynolds number and the heat flux. Moreover, there is a certain Reynolds number at which flow instabilities are generated that intensify the mixing process due to the tangential injections of the fluids.
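The mixing index mentioned in the results is commonly defined from the variance of the sampled concentration field; the sketch below uses one standard variance-based definition, which may differ in detail from the one used in the paper.

```python
import numpy as np

def mixing_index(c):
    """Variance-based mixing index for a binary mixture: 1 - sigma/sigma_max,
    where sigma is the standard deviation of the sampled concentration field
    and sigma_max = sqrt(cbar*(1 - cbar)) is its value for a completely
    segregated state with the same mean concentration."""
    c = np.asarray(c, dtype=float)
    cbar = c.mean()
    sigma_max = np.sqrt(cbar * (1.0 - cbar))
    return 1.0 - c.std() / sigma_max

print(mixing_index([0.5, 0.5, 0.5, 0.5]))   # perfectly mixed
print(mixing_index([0.0, 0.0, 1.0, 1.0]))   # fully segregated
```

The index runs from 0 (segregated) to 1 (uniform), which is why it serves as a single scalar measure of how Reynolds number and heat flux affect the mixing.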
A brief measure of attitudes toward mixed methods research in psychology.
Roberts, Lynne D; Povee, Kate
2014-01-01
The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research along with validation measures was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher order four factor model provided the best fit to the data. The four factors, 'Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component,' each have acceptable internal reliability. Known groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
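As a hedged illustration of the multigrid idea (for the scalar 1D Poisson problem, not the paper's P-1 elasticity discretization), the sketch below runs two-grid correction cycles with weighted-Jacobi smoothing and a Galerkin coarse operator:

```python
import numpy as np

def two_grid_cycle(A, P, f, u, nu=2, omega=2/3):
    """One two-grid correction cycle for A u = f: weighted-Jacobi
    pre-smoothing, coarse-grid correction with the Galerkin coarse
    operator R A P (restriction R = P^T / 2), then post-smoothing."""
    d = np.diag(A)
    for _ in range(nu):                          # pre-smoothing
        u = u + omega * (f - A @ u) / d
    R = P.T / 2.0
    Ac = R @ A @ P                               # Galerkin coarse operator
    u = u + P @ np.linalg.solve(Ac, R @ (f - A @ u))
    for _ in range(nu):                          # post-smoothing
        u = u + omega * (f - A @ u) / d
    return u

# Model problem: 1D Poisson with homogeneous Dirichlet boundaries.
nc = 15                                          # coarse interior points
n = 2 * nc + 1                                   # fine interior points
h = 1.0 / (n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
P = np.zeros((n, nc))                            # linear interpolation
for j in range(nc):                              # coarse node j sits at fine node 2j+1
    P[2*j + 1, j] = 1.0
    P[2*j, j] += 0.5
    P[2*j + 2, j] += 0.5
f = np.ones(n)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(A, P, f, u)
print(np.linalg.norm(f - A @ u) / np.linalg.norm(f))
```

The residual contracts by a mesh-independent factor per cycle, which is the uniform-convergence property the paper establishes for the much harder nearly incompressible elasticity case.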
A Technological Acceptance of Remote Laboratory in Chemistry Education
ERIC Educational Resources Information Center
Ling, Wendy Sing Yii; Lee, Tien Tien; Tho, Siew Wei
2017-01-01
The purpose of this study is to evaluate the technological acceptance of Chemistry students, and the opinions of Chemistry lecturers and laboratory assistants towards the use of remote laboratory in Chemistry education. The convergent parallel design mixed method was carried out in this study. The instruments involved were questionnaire and…
A novel family of DG methods for diffusion problems
NASA Astrophysics Data System (ADS)
Johnson, Philip; Johnsen, Eric
2017-11-01
We describe and demonstrate a novel family of numerical schemes for handling elliptic/parabolic PDE behavior within the discontinuous Galerkin (DG) framework. Starting from the mixed-form approach commonly applied for handling diffusion (examples include Local DG and BR2), the new schemes apply the Recovery concept of Van Leer to handle cell interface terms. By applying recovery within the mixed-form approach, we have designed multiple schemes that show better accuracy than other mixed-form approaches while being more flexible and easier to implement than the Recovery DG schemes of Van Leer. While typical mixed-form approaches converge at rate 2p in the cell-average or functional error norms (where p is the order of the solution polynomial), many of our approaches achieve order 2p + 2 convergence. In this talk, we will describe multiple schemes, including both compact and non-compact implementations; the compact approaches use only interface-connected neighbors to form the residual for each element, while the non-compact approaches add one extra layer to the stencil. In addition to testing the schemes on purely parabolic PDE problems, we apply them to handle the diffusive flux terms in advection-diffusion systems, such as the compressible Navier-Stokes equations.
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees of convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
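The part-selection problem can be pictured with a brute-force toy: enumerate every promoter/RBS combination, discard infeasible ones, and keep the selection closest to a target expression level. All names and numbers below are hypothetical, and the crude "expression = promoter strength × RBS efficiency" model is only an illustration; exhaustive search stands in for the MINLP solver, which reaches the same global optimum deterministically on libraries far too large to enumerate.

```python
from itertools import product

# Hypothetical toy library of characterized parts (made-up strengths).
promoters = {"pA": 1.0, "pB": 3.5, "pC": 8.0}
rbs_sites = {"r1": 0.4, "r2": 1.2, "r3": 2.5}
target = 5.0            # desired expression level
max_burden = 12.0       # constraint: cap on promoter*rbs "load" on the cell

# Global search over the discrete design space subject to the constraint.
best = min(
    ((p, r) for p, r in product(promoters, rbs_sites)
     if promoters[p] * rbs_sites[r] <= max_burden),
    key=lambda pr: abs(promoters[pr[0]] * rbs_sites[pr[1]] - target),
)
print(best, promoters[best[0]] * rbs_sites[best[1]])
```

Each additional part type multiplies the number of candidate circuits, which is the combinatorial explosion that motivates a rigorous MINLP formulation instead of enumeration or heuristics.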
ERIC Educational Resources Information Center
Smith, Rachel Naomi
2017-01-01
The purpose of this mixed methods research study was two-fold. First, I compared the findings of the success rates of online mathematics students with the perceived effects of classroom capture software in hopes to find convergence. Second, I used multiple methods in different phases of the study to expand the breadth and range of the effects of…
Observation of Compressible Plasma Mix in Cylindrically Convergent Implosions
NASA Astrophysics Data System (ADS)
Barnes, Cris W.; Batha, Steven H.; Lanier, Nicholas E.; Magelssen, Glenn R.; Tubbs, David L.; Dunne, A. M.; Rothman, Steven R.; Youngs, David L.
2000-10-01
An understanding of hydrodynamic mix in convergent geometry will be of key importance in the development of a robust ignition/burn capability on NIF, LMJ and future pulsed power machines. We have made use of the OMEGA laser facility at the University of Rochester to investigate directly the mix evolution in a convergent-geometry, compressible plasma regime. The experiments comprise a plastic cylindrical shell imploded by direct laser irradiation. The cylindrical shell surrounds a lower density plastic foam which provides sufficient back pressure to allow the implosion to stagnate at a sufficiently high radius to permit quantitative radiographic diagnosis of the interface evolution near turnaround. The susceptibility to mix of the shell-foam interface is varied by choosing materials of different density for the inner shell surface (thus varying the Atwood number). This allows the study of shock-induced Richtmyer-Meshkov growth during the coasting phase, and Rayleigh-Taylor growth during the stagnation phase. The experimental results will be described along with calculational predictions using various radiation hydrodynamics codes and turbulent mix models.
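The Atwood number the experiments vary is a simple density contrast; the helper below uses illustrative densities, not the experiment's actual shell and foam values.

```python
def atwood(rho_heavy: float, rho_light: float) -> float:
    """Atwood number A = (rho1 - rho2) / (rho1 + rho2), the density
    contrast that sets the growth rate of Rayleigh-Taylor and
    Richtmyer-Meshkov instabilities at an interface."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

# Illustrative (assumed) densities: a 1.2 g/cc inner shell layer against
# a 0.1 g/cc foam gives a strongly mix-susceptible interface.
print(atwood(1.2, 0.1))
```

A ranges from 0 (matched densities, stable to these modes) toward 1 (extreme contrast), so swapping the inner-surface material directly tunes the interface's susceptibility to mix.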
Use and misuse of mixed methods in population oral health research: A scoping review.
Gupta, A; Keuskamp, D
2018-05-30
Despite the known benefits of a mixed methods approach in health research, little is known of its use in the field of population oral health. To map the extent of literature using a mixed methods approach to examine population oral health outcomes. For a comprehensive search of all the available literature published in the English language, databases including PubMed, Dentistry and Oral Sciences Source (DOSS), CINAHL, Web of Science and EMBASE (including Medline) were searched using a range of keywords from inception to October 2017. Only peer-reviewed, population-based studies of oral health outcomes conducted among non-institutionalised participants and using mixed methods were considered eligible for inclusion. Only nine studies met the inclusion criteria and were included in the review. The most frequent oral health outcome investigated was caries experience. However, most studies lacked a theoretical rationale or framework for using mixed methods, or supporting the use of qualitative data. Concurrent triangulation with a convergent design was the most commonly used mixed methods typology for integrating quantitative and qualitative data. The tools used to collect quantitative and qualitative data were mostly limited to surveys and interviews. With growing complexity recognised in the determinants of oral disease, future studies addressing population oral health outcomes are likely to benefit from the use of mixed methods. Explicit consideration of theoretical framework and methodology will strengthen those investigations. Copyright© 2018 Dennis Barber Ltd.
A systematic review of mixed methods research on human factors and ergonomics in health care.
Carayon, Pascale; Kianfar, Sarah; Li, Yaqiong; Xie, Anping; Alyousef, Bashar; Wooldridge, Abigail
2015-11-01
This systematic literature review provides information on the use of mixed methods research in human factors and ergonomics (HFE) research in health care. Using the PRISMA methodology, we searched four databases (PubMed, PsycInfo, Web of Science, and Engineering Village) for studies that met the following inclusion criteria: (1) field study in health care, (2) mixing of qualitative and quantitative data, (3) HFE issues, and (4) empirical evidence. Using an iterative and collaborative process supported by a structured data collection form, the six authors identified a total of 58 studies that primarily address HFE issues in health information technology (e.g., usability) and in the work of healthcare workers. About two-thirds of the mixed methods studies used the convergent parallel study design where quantitative and qualitative data were collected simultaneously. A variety of methods were used for collecting data, including interview, survey and observation. The most frequent combination involved interview for qualitative data and survey for quantitative data. The use of mixed methods in healthcare HFE research has increased over time. However, increasing attention should be paid to the formal literature on mixed methods research to enhance the depth and breadth of this research. Copyright © 2015. Published by Elsevier Ltd.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
A brief measure of attitudes toward mixed methods research in psychology
Roberts, Lynne D.; Povee, Kate
2014-01-01
The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281
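The "acceptable internal reliability" reported for each subscale is conventionally checked with Cronbach's alpha, which needs only the item variances and the variance of the total score. A minimal sketch on synthetic data (not the study's data; the sample size of 274 and the three-item subscale merely echo the abstract):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic responses: one latent attitude factor drives three items of a
# hypothetical subscale, plus item-specific noise.
rng = np.random.default_rng(42)
latent = rng.normal(size=(274, 1))
scores = latent + rng.normal(scale=0.5, size=(274, 3))
alpha = cronbach_alpha(scores)
```

With the noise level above, the three items share most of their variance, so alpha comes out well above the usual 0.7 rule of thumb; perfectly parallel items give alpha = 1.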
Mixed methods designs: an innovative methodological approach for nursing research.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods (MM) research designs combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can add value to clinical practice as, through the integration of qualitative and quantitative methods, researchers can better assess the complex phenomena typical of nursing.
Applications of mixed-methods methodology in clinical pharmacy research.
Hadi, Muhammad Abdul; Closs, S José
2016-06-01
Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are: the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, so it is advisable to adhere to this to ensure methodological rigour. When to use: It is best suited when the research questions require: triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; developing a scale or questionnaire; and answering different research questions within a single study. Two case studies have been presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.
Balanced Reading Basals and the Impact on Third-Grade Reading Achievement
ERIC Educational Resources Information Center
Dorsey, Windy
2015-01-01
This convergent parallel mixed methods study sought to determine whether the reading program increased third-grade student achievement. The research questions of the study examined the reading achievement scores of third-grade students and the effectiveness of McGraw-Hill Reading Wonders™. Significant differences were observed when a paired sample t test…
ERIC Educational Resources Information Center
Fettahlioglu, Pinar
2018-01-01
The purpose of this study is to investigate the effect of argumentation implementation applied in the environmental science course on science teacher candidates' environmental education self-efficacy beliefs and perspectives according to environmental problems. In this mixed method research study, convergent parallel design was utilized.…
Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model
ERIC Educational Resources Information Center
Duman, Serap Nur; Akbas, Oktay
2017-01-01
This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…
The Influence of PBL on Students' Self-Efficacy Beliefs in Chemistry
ERIC Educational Resources Information Center
Mataka, Lloyd M.; Grunert Kowalske, Megan
2015-01-01
A convergent mixed methods research study was used to investigate whether or not undergraduate students who participated in a problem-based learning (PBL) laboratory environment improved their self-efficacy beliefs in chemistry. The Chemistry Attitude and Experience Questionnaire (CAEQ) was used as a pre- and post-test to determine changes in…
Exploring and Leveraging Chinese International Students' Strengths for Success
ERIC Educational Resources Information Center
He, Ye; Hutson, Bryant
2018-01-01
This study used an Appreciative Education framework to explore the strengths of Chinese international students and to identify areas where support is needed during their transition to U.S. higher education settings. Using a convergent mixed methods design with data collected from surveys, interviews and focus groups, the complex nature of the…
Reading Habits of College Students in the United States
ERIC Educational Resources Information Center
Huang, SuHua; Capps, Matthew; Blacklock, Jeff; Garza, Mary
2014-01-01
This study employed a convergent mixed-method research design to investigate reading habits of American college students. A total of 1,265 (466 male and 799 female) college students voluntarily participated in the study by completing a self-reported survey. Twelve students participated in semi-structured interviews and classroom observations.…
Authentic Reading, Writing, and Discussion: An Exploratory Study of a Pen Pal Project
ERIC Educational Resources Information Center
Gambrell, Linda B.; Hughes, Elizabeth M.; Calvert, Leah; Malloy, Jacquelynn A.; Igo, Brent
2011-01-01
In this exploratory study, reading, writing, and discussion were examined within the context of a pen pal intervention focusing on authentic literacy tasks. The study employed a mixed-method design with a triangulation-convergence model to explore the relationship between authentic literacy tasks and the literacy motivation of elementary students…
Humor Climate of the Primary Schools
ERIC Educational Resources Information Center
Sahin, Ahmet
2018-01-01
The aim of this study is to determine the opinions of primary school administrators and teachers on the humor climate in primary schools. The study was modeled as a convergent parallel design, one of the mixed methods designs. The data gathered from 253 administrator questionnaires and 651 teacher questionnaires were evaluated for the quantitative part of the…
NASA Astrophysics Data System (ADS)
Negrello, Camille; Gosselet, Pierre; Rey, Christian
2018-05-01
An efficient method for solving large nonlinear problems combines Newton solvers and Domain Decomposition Methods (DDM). In the DDM framework, the boundary conditions can be chosen to be primal, dual or mixed. The mixed approach has the advantage of admitting a search for an optimal interface parameter (often called the impedance) which can increase the convergence rate. The optimal value of this parameter is often too expensive to compute exactly in practice: an approximate version has to be sought, along with a compromise between efficiency and computational cost. In the context of parallel algorithms for solving nonlinear structural mechanics problems, we propose a new heuristic for the impedance which combines short- and long-range effects at a low computational cost.
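The role of the interface impedance can be seen in the simplest mixed-DDM setting: a Robin-Robin iteration for the 1D Poisson problem on two subdomains. This is an illustrative sketch (standard second-order finite differences, not the authors' nonlinear solver); the parameter p below is the impedance, and a poor choice visibly slows convergence:

```python
import numpy as np

def robin_subdomain(n, h, p, g, f=1.0):
    """Solve -u'' = f on one subdomain: u = 0 at the outer end, mixed (Robin)
    condition du/dn + p*u = g at the interface node. Second-order central
    differences, with ghost-point elimination in the Robin row."""
    A = np.zeros((n, n))
    b = np.full(n, f)
    for i in range(n - 1):              # interior rows (Dirichlet end folded in)
        A[i, i] = 2.0 / h**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        A[i, i + 1] = -1.0 / h**2
    A[n - 1, n - 1] = (2.0 + 2.0 * h * p) / h**2   # Robin row at the interface
    A[n - 1, n - 2] = -2.0 / h**2
    b[n - 1] = f + 2.0 * g / h
    return np.linalg.solve(A, b)

def robin_robin(p, n=50, tol=1e-12, maxit=200):
    """Two-subdomain Robin-Robin iteration for -u'' = 1 on (0,1) with
    u(0) = u(1) = 0 and interface at x = 0.5. By symmetry the right
    subdomain is solved in the mirrored coordinate."""
    h = 0.5 / n
    g1 = 0.0
    for it in range(1, maxit + 1):
        u1 = robin_subdomain(n, h, p, g1)      # left subdomain solve
        g2 = -g1 + 2.0 * p * u1[-1]            # pass Robin data to the right
        u2 = robin_subdomain(n, h, p, g2)      # right subdomain (mirrored)
        g1_new = -g2 + 2.0 * p * u2[-1]        # pass Robin data to the left
        if abs(g1_new - g1) < tol:
            return u1, it
        g1 = g1_new
    return u1, maxit

u_opt, it_opt = robin_robin(p=2.0)     # near-optimal impedance: fast
u_big, it_big = robin_robin(p=20.0)    # poorly chosen impedance: slow
```

For this model problem the continuous analysis gives a per-sweep error factor of ((p/2 - 1)/(p/2 + 1))^2 for subdomains of length 1/2, so p = 2 is near-optimal while p = 20 needs many more sweeps; the converged solution matches u(x) = x(1-x)/2 at the nodes.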
Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.
2015-01-01
PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and, economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes (two node-averaging schemes, with and without clipping, and four schemes that employ different stencils for LSQ gradient reconstruction). The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme that offers low complexity, second-order discretization errors, and fast convergence.
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent "wrapper" for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
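The wrapper idea can be sketched on a toy problem. In the sketch below, a fractional (LP-relaxation) bound for a 0/1 knapsack stands in for the per-node progressive-hedging solve of BBPH; only the prune/branch skeleton of best-first branch and bound carries over, not any detail of PH itself:

```python
import heapq

def fractional_bound(items, cap):
    """Upper bound from the LP relaxation of 0/1 knapsack: fill greedily by
    value/weight, taking a fractional piece of the first item that won't fit."""
    total = 0.0
    for value, weight in items:
        if weight <= cap:
            total += value
            cap -= weight
        else:
            total += value * cap / weight
            break
    return total

def branch_and_bound(items, cap):
    """Best-first branch and bound for 0/1 knapsack (values >= 0). The LP
    relaxation is the bounding oracle here; in BBPH a progressive-hedging
    solve would play that role at each node of the tree."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0.0
    # node = (-bound, depth, accumulated value, remaining capacity)
    heap = [(-fractional_bound(items, cap), 0, 0.0, cap)]
    while heap:
        neg_bound, depth, value, rem = heapq.heappop(heap)
        if -neg_bound <= best:               # bound cannot beat the incumbent
            continue
        if depth == len(items):
            best = max(best, value)
            continue
        v, w = items[depth]
        if w <= rem:                          # branch: take item `depth`
            child = value + v
            best = max(best, child)           # partial solutions are feasible
            bound = child + fractional_bound(items[depth + 1:], rem - w)
            heapq.heappush(heap, (-bound, depth + 1, child, rem - w))
        bound = value + fractional_bound(items[depth + 1:], rem)
        if bound > best:                      # branch: skip item `depth`
            heapq.heappush(heap, (-bound, depth + 1, value, rem))
    return best

best_value = branch_and_bound([(60, 10), (100, 20), (120, 30)], cap=50)
```

Because nodes are expanded in order of decreasing bound and pruned against the incumbent, the search is exact: it returns the global optimum (220 for the instance above) while typically exploring only a fraction of the tree.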
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapathy, E.
1984-01-01
A single level effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point by point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Guetterman, Timothy C; Fetters, Michael D; Creswell, John W
2015-11-01
Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.
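A statistics-by-themes joint display is, at its core, a table that pairs each qualitative theme with the matching quantitative statistic so integrated results can be read side by side. A minimal, dependency-free sketch; the themes, statistics and quotes are hypothetical placeholders, not findings from any cited study:

```python
# A "statistics-by-themes" joint display pairs each qualitative theme with the
# matching quantitative statistic so integrated results read side by side.
def joint_display(rows, headers):
    """Render rows of (theme, statistic, quote) tuples as an aligned table."""
    widths = [max(len(str(r[i])) for r in [headers] + rows)
              for i in range(len(headers))]
    fmt = "  ".join("{:<%d}" % w for w in widths)
    lines = [fmt.format(*headers),
             fmt.format(*("-" * w for w in widths))]
    lines += [fmt.format(*map(str, r)) for r in rows]
    return "\n".join(lines)

# Hypothetical themes, statistics and quotes -- illustrative only.
rows = [
    ("Limited exposure to training", "M = 2.1 (SD 0.8)", "'we never covered it'"),
    ("Perceived paradigm conflict",  "M = 3.4 (SD 1.1)", "'the two don't mix'"),
]
table = joint_display(rows, ("Qualitative theme", "Survey statistic",
                             "Illustrative quote"))
print(table)
```

Real joint displays often add a final "meta-inference" column recording what the side-by-side comparison reveals (convergence, divergence, or expansion).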
ERIC Educational Resources Information Center
Tafazoli, Dara; Gómez Parra, Mª Elena; Huertas Abril, Cristina A.
2018-01-01
The purpose of this study was to compare Iranian and non-Iranian English language students' attitudes towards Computer-Assisted Language Learning (CALL). Furthermore, the relations of gender, education level, and age to their attitudes are investigated. A convergent mixed methods design was used for analyzing both quantitative and…
ERIC Educational Resources Information Center
Demir, Selcuk Besir; Pismek, Nuray
2018-01-01
In today's educational landscape, social studies classes are characterized by controversial issues (CIs) that teachers handle differently using various ideologies. These CIs have become more and more popular, particularly in heterogeneous communities. The actual classroom practices for teaching social studies courses are unclear in the context of…
ERIC Educational Resources Information Center
Sezer, Adem; Inel, Yusuf; Seçkin, Ahmet Çagdas; Uluçinar, Ufuk
2017-01-01
This study aimed to detect any relationship that may exist between classroom teacher candidates' class participation and their attention levels. The research method was a convergent parallel design, mixing quantitative and qualitative research techniques, and the study group was composed of 21 freshmen studying in the Classroom Teaching Department…
Adaptation to a Curriculum Delivered via iPad: The Challenge of Being Early Adopters
ERIC Educational Resources Information Center
Stec, Melissa; Bauer, Melanie; Hopgood, Daniel; Beery, Theresa
2018-01-01
This convergent mixed methods study was designed to examine the skills and attitudes toward using an iPad to deliver nursing curriculum and enhance active learning strategies for sophomore Bachelor of Science in Nursing (BSN) and Doctor of Nursing Practice (DNP) students at a Midwestern university. Quantitative data were collected using an…
ERIC Educational Resources Information Center
Hoffman, Lynn M.; Nottis, Katharyn E. K.
2008-01-01
This mixed-methods study examines young adolescents' perceptions of strategies implemented before a state-mandated "high-stakes" test. Survey results for Grade 8 students (N = 215) are analyzed by sex, academic group, and preparation team. Letters to the principal are reviewed for convergence and additional themes. Although students were most…
ERIC Educational Resources Information Center
Papa, Dorothy P.
2017-01-01
This exploratory convergent parallel mixed methods study examined Connecticut educational leadership preparation programs for the existence of mental health content, to learn the extent to which pre-service school leaders are prepared to address student mental health. Interviews were conducted with school mental health experts and Connecticut…
Health status convergence at the local level: empirical evidence from Austria
2011-01-01
Introduction: Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of the health status has rarely been investigated by applying methods well established in other scientific fields. In the following paper we study the relation between initial levels of the health status and its improvement at the local community level in Austria in the time period 1969-2004. Methods: We use age-standardized mortality rates from 2381 Austrian communities as an indicator for the health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil coefficient of inequality). Various econometric techniques (weighted OLS, quantile regression, Kendall's rank concordance) are used to test for absolute and conditional beta-convergence in mortality. Results: Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean in the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions: The highly significant beta-convergence across communities might be caused by (i) the efforts to harmonize and centralize health policy at the federal level in Austria since the 1970s, (ii) the diminishing returns of the input factors in the health production function, which might lead to convergence as the general conditions (e.g., income, education) improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities which exhibit more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
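Absolute beta-convergence is tested by regressing the change in (log) mortality on its initial level; a negative slope means initially worse-off communities improve faster. A sketch on synthetic data (the coefficient, noise level and community count are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                      # number of communities

# Synthetic data: communities with higher initial (log) mortality improve
# faster, i.e. a negative beta -- the beta-convergence hypothesis.
log_m0 = rng.normal(7.0, 0.4, n)             # log of initial mortality rate
true_beta = -0.35
growth = 1.2 + true_beta * log_m0 + rng.normal(0.0, 0.1, n)

# Absolute beta-convergence: OLS of the mortality change on the initial level.
X = np.column_stack([np.ones(n), log_m0])
(intercept, beta_hat), *_ = np.linalg.lstsq(X, growth, rcond=None)
```

Conditional beta-convergence adds controls (income, education, etc.) as extra columns of X; a significantly negative beta_hat after controls is the convergence signal the abstract reports.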
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
Parallelized implicit propagators for the finite-difference Schrödinger equation
NASA Astrophysics Data System (ADS)
Parker, Jonathan; Taylor, K. T.
1995-08-01
We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g., LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of workstation clusters and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
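The mixed direct-iterative idea can be sketched for one Crank-Nicolson step of a free-particle 1D Schrödinger equation: the diagonal blocks of the implicit system are inverted directly, while the weak inter-block coupling is handled by block Jacobi iteration. This is an illustrative toy, not the paper's propagator; the grid size, time step and block count below are arbitrary choices:

```python
import numpy as np

N, dx, dt = 240, 0.1, 0.001
x = (np.arange(N) - N // 2) * dx

# Tridiagonal free-particle Hamiltonian H = -d^2/dx^2 (units with hbar = 2m = 1)
H = (np.diag(np.full(N, 2.0)) + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / dx**2

# Crank-Nicolson (Cayley) step: (I + i*dt/2 H) psi_new = (I - i*dt/2 H) psi_old
psi = np.exp(-x**2 / 2 + 1j * x)                  # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # unit norm
A = np.eye(N) + 0.5j * dt * H
b = (np.eye(N) - 0.5j * dt * H) @ psi

def block_jacobi(A, b, nblocks=8, tol=1e-12, maxit=500):
    """Mixed direct-iterative solve: each diagonal block is inverted directly
    (dense solve standing in for an LU factorization), while the inter-block
    coupling is relaxed by Jacobi iteration. Blocks are independent, so the
    inner loop is trivially parallel."""
    n = len(b)
    bounds = np.linspace(0, n, nblocks + 1, dtype=int)
    sol = np.zeros_like(b)
    for it in range(1, maxit + 1):
        new = np.empty_like(sol)
        for k in range(nblocks):
            s, e = bounds[k], bounds[k + 1]
            # right-hand side using current guesses for the other blocks
            r = b[s:e] - A[s:e] @ sol + A[s:e, s:e] @ sol[s:e]
            new[s:e] = np.linalg.solve(A[s:e, s:e], r)
        if np.max(np.abs(new - sol)) < tol:
            return new, it
        sol = new
    return sol, maxit

psi_new, iters = block_jacobi(A, b)
```

For this time step the system is strongly diagonally dominant, so the iteration converges in a handful of sweeps; the Cayley form also keeps the propagation unitary, so the wavefunction norm is preserved.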
Rashev, Svetoslav; Moule, David C; Rashev, Vladimir
2012-11-01
We perform converged high-precision variational calculations to determine the frequencies of a large number of vibrational levels in S₀ D₂CO, extending from low to very high excess vibrational energies. For the calculations we use our specific vibrational method (recently employed for studies on H₂CO), consisting of a combination of a search/selection algorithm and a Lanczos iteration procedure. Using the same method we perform large-scale converged calculations on the vibrational level spectral structure and fragmentation at selected highly excited overtone states, up to excess vibrational energies of ∼17,000 cm⁻¹, in order to study the characteristics of intramolecular vibrational redistribution (IVR), vibrational level density and mode selectivity. Copyright © 2012 Elsevier B.V. All rights reserved.
Statistical independence of the initial conditions in chaotic mixing.
García de la Cruz, J M; Vassilicos, J C; Rossi, L
2017-11-01
Experimental evidence of the scalar convergence towards a global strange eigenmode independent of the scalar initial condition in chaotic mixing is provided. This convergence, underpinning the initial-condition-independent nature of chaotic mixing in any passive scalar, is demonstrated by scalar fields with different initial conditions developing statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching, as described by the Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay then becomes exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar using a set of laminar flows with unsteady periodic topology. These flows, which resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free-surface 2D mixing experiment. This forcing generates a velocity field which periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing area and an external extensional area. These two regions are interconnected through two pairs of fluid conduits which transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.
Exploring partners' perspectives on participation in heart failure home care: a mixed-method design.
Näsström, Lena; Luttik, Marie Louise; Idvall, Ewa; Strömberg, Anna
2017-05-01
To describe partners' perspectives on participation in the care of patients with heart failure receiving home care. Partners are often involved in the care of patients with heart failure and have an important role in improving patients' well-being and self-care. Partners have described both negative and positive experiences of involvement, but knowledge of how partners of patients with heart failure view participation in care when the patient receives home care is lacking. A convergent parallel mixed-method design was used, including data from interviews and questionnaires. A purposeful sample of 15 partners was used. Data collection lasted from February 2010 to December 2011. Interviews were analysed with content analysis, and data from questionnaires (participation, caregiving, health-related quality of life, depressive symptoms) were analysed statistically. Finally, results were merged, interpreted and labelled as comparable and convergent or as inconsistent. Partners were satisfied with most aspects of participation, information and contact. Qualitative findings revealed four different aspects of participation: adapting to the caring needs and illness trajectory, coping with caregiving demands, interacting with healthcare providers and needing knowledge to comprehend the health situation. The merged results were confirmatory and convergent, and the expanded knowledge gave a broader understanding of partner participation in this context. The results revealed different levels of partner participation. Heart failure home care provided good opportunities for both participation and contact during home visits, which were necessary to meet partners' ongoing need for information to comprehend the situation. © 2016 John Wiley & Sons Ltd.
Designing a mixed methods study in primary care.
Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V
2004-01-01
Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.
Daly, Tamara; Banerjee, Albert; Armstrong, Pat; Armstrong, Hugh; Szebehely, Marta
2011-06-01
We conducted a mixed-methods study (the focus of this article) to understand how workers in long-term care facilities experienced working conditions. We surveyed unionized care workers in Ontario (n = 917); we also surveyed workers in three Canadian provinces (n = 948) and four Scandinavian countries (n = 1,625). In post-survey focus groups, we presented respondents with survey questions and descriptive statistical findings, and asked them: "Does this reflect your experience?" Workers reported time pressures and the frequency of experiences of physical violence and unwanted sexual attention, as we explain. We discuss how iteratively mixing qualitative and quantitative methods to triangulate survey and focus group results led to expected data convergence and to unexpected data divergence that revealed a normalized culture of structural violence in long-term care facilities. We discuss how the finding of structural violence emerged, as well as the deeper meaning, context, and insights resulting from our combined methods.
A partitioned correlation function interaction approach for describing electron correlation in atoms
NASA Astrophysics Data System (ADS)
Verdebout, S.; Rynkun, P.; Jönsson, P.; Gaigalas, G.; Froese Fischer, C.; Godefroid, M.
2013-04-01
The traditional multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) methods are based on a single orthonormal orbital basis. For atoms with many closed core shells, or complicated shell structures, a large orbital basis is needed to saturate the different electron correlation effects such as valence, core-valence and correlation within the core shells. The large orbital basis leads to massive configuration state function (CSF) expansions that are difficult to handle, even on large computer systems. We show that it is possible to relax the orthonormality restriction on the orbital basis and break down the originally very large calculations into a series of smaller calculations that can be run in parallel. Each calculation determines a partitioned correlation function (PCF) that accounts for a specific correlation effect. The PCFs are built on optimally localized orbital sets and are added to a zero-order multireference (MR) function to form a total wave function. The expansion coefficients of the PCFs are determined from a low dimensional generalized eigenvalue problem. The interaction and overlap matrices are computed using a biorthonormal transformation technique (Verdebout et al 2010 J. Phys. B: At. Mol. Phys. 43 074017). The new method, called partitioned correlation function interaction (PCFI), converges rapidly with respect to the orbital basis and gives total energies that are lower than the ones from ordinary MCHF and CI calculations. The PCFI method is also very flexible when it comes to targeting different electron correlation effects. Focusing our attention on neutral lithium, we show that by dedicating a PCF to the single excitations from the core, spin- and orbital-polarization effects can be captured very efficiently, leading to highly improved convergence patterns for hyperfine parameters compared with MCHF calculations based on a single orthogonal radial orbital basis. 
By collecting separately optimized PCFs to correct the MR function, the variational degrees of freedom in the relative mixing coefficients of the CSFs building the PCFs are inhibited. The constraints on the mixing coefficients lead to small offsets in computed properties such as hyperfine structure, isotope shift and transition rates with respect to the correct values. By (partially) deconstraining the mixing coefficients one converges to the correct limits and keeps the tremendous advantage of improved convergence rates that comes from the use of several orbital sets. Reducing ultimately each PCF to a single CSF with its own orbital basis leads to a non-orthogonal CI approach. Various perspectives of the new method are given.
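The low-dimensional generalized eigenvalue problem that fixes the PCF expansion coefficients can be sketched with toy matrices. The interaction matrix H and overlap matrix S below are illustrative placeholders (the basis is non-orthogonal, so S is not the identity), not atomic-structure data; the reduction to a standard eigenproblem via a Cholesky factorization of the overlap is one standard route.

```python
import numpy as np

# Toy interaction (H) and overlap (S) matrices for a zero-order MR
# function plus two separately optimized PCFs; values are illustrative.
H = np.array([[-7.40, -0.12, -0.08],
              [-0.12, -7.10, -0.02],
              [-0.08, -0.02, -7.05]])
S = np.array([[1.00, 0.15, 0.10],
              [0.15, 1.00, 0.05],
              [0.10, 0.05, 1.00]])   # non-orthogonal basis => S != I

# Reduce the generalized problem H c = E S c to a standard symmetric
# eigenproblem using the Cholesky factorization S = L L^T.
L = np.linalg.cholesky(S)
Linv = np.linalg.inv(L)
Ht = Linv @ H @ Linv.T              # transformed, still symmetric
E, Y = np.linalg.eigh(Ht)           # ascending eigenvalues
C = Linv.T @ Y                      # coefficients in the original basis

E0, c0 = E[0], C[:, 0]              # lowest energy and its mixing vector
```

The recovered coefficients satisfy H c₀ = E₀ S c₀ and are S-normalized, which is exactly the role the mixing coefficients play when the PCFs are added to the zero-order MR function.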
Carcone, April Idalski; Barton, Ellen; Eggly, Susan; Brogan Hartlieb, Kathryn E.; Thominet, Luke; Naar, Sylvie
2016-01-01
Objective: We conducted an exploratory mixed methods study to describe the ambivalence African-American adolescents and their caregivers expressed during motivational interviewing sessions targeting weight loss. Methods: We extracted ambivalence statements from 37 previously coded counseling sessions. We used directed content analysis to categorize ambivalence related to the target behaviors of nutrition, activity, or weight. We compared adolescent-caregiver dyads' ambivalence using the paired sample t-test and Wilcoxon signed-rank test. We then used conventional content analysis to compare the specific content of adolescents' and caregivers' ambivalence statements. Results: Adolescents and caregivers expressed the same number of ambivalence statements overall and in relation to activity and weight, but caregivers expressed more statements about nutrition. Content analysis revealed convergences and divergences in caregivers' and adolescents' ambivalence about weight loss. Conclusion: Understanding divergences in adolescent-caregiver ambivalence about the specific behaviors to target may partially explain the limited success of family-based weight loss interventions targeting African American families and provides a unique opportunity for providers to enhance family communication, foster teamwork, and build self-efficacy to promote behavior change. Practice implications: Clinicians working in family contexts should explore how adolescents and caregivers converge and diverge in their ambivalence in order to recommend weight loss strategies that best meet families' needs. PMID:26916012
ERIC Educational Resources Information Center
Bullock, Emma P.; Shumway, Jessica F.; Watts, Christina M.; Moyer-Packenham, Patricia S.
2017-01-01
The purpose of this study was to contribute to the research on mathematics app use by very young children, and specifically mathematics apps for touch-screen mobile devices that contain virtual manipulatives. The study used a convergent parallel mixed methods design, in which quantitative and qualitative data were collected in parallel, analyzed…
Evolving an Accelerated School Model through Student Perceptions and Student Outcome Data
ERIC Educational Resources Information Center
Braun, Donna L.; Gable, Robert K.; Billups, Felice D.; Vieira, Mary; Blasczak, Danielle
2016-01-01
A mixed methods convergent evaluation informed the redesign of an innovative public school that uses an accelerated model to serve grades 7-9 students who have been retained in grade level and are at risk for dropping out of school. After over 25 years in operation, a shift of practices/policies away from grade retention and toward social…
ERIC Educational Resources Information Center
Bozkur, B. Ümit; Erim, Ali; Çelik-Demiray, Pinar
2018-01-01
This research investigates the effect of individual voice training on pre-service Turkish language teachers' speaking skills. The main claim in this research is that teachers' most significant teaching instrument is their voice and it needs to be trained. The research was based on the convergent parallel mixed method. The quantitative part was…
ERIC Educational Resources Information Center
Marsh, Julie A.; McCombs, Jennifer Sloan; Martorell, Francisco
2010-01-01
This article examines the convergence of two popular school improvement policies: instructional coaching and data-driven decision making (DDDM). Drawing on a mixed methods study of a statewide reading coach program in Florida middle schools, the article examines how coaches support DDDM and how this support relates to student and teacher outcomes.…
Evaluating ICT Integration in Turkish K-12 Schools through Teachers' Views
ERIC Educational Resources Information Center
Aydin, Mehmet Kemal; Gürol, Mehmet; Vanderlinde, Ruben
2016-01-01
The current study aims to explore ICT integration in Turkish K-12 schools, purposively selected as a representation of F@tih and non-F@tih public schools together with a private school. A convergent mixed methods design was employed with a multiple case strategy, as this enables casewise comparisons. The quantitative data was…
Blind separation of positive sources by globally convergent gradient search.
Oja, Erkki; Plumbley, Mark
2004-09-01
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
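A minimal numerical illustration of the idea, under the assumption of two exponential (hence nonnegative and well-grounded) unit-variance sources: whiten the observations, then rotate the whitened vector until the outputs are nonnegative. For two sources the orthogonal group is one-dimensional, so the gradient search reduces to a scalar angle; the coarse initial scan is an implementation convenience of this sketch, not part of the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, nonnegative, well-grounded sources: exponential
# variables have unit variance and a nonzero density at zero.
n = 20000
S = rng.exponential(scale=1.0, size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])        # full-rank mixing matrix
X = A @ S

# Whiten using the covariance of X; the whitening map is applied to the
# uncentered data so that nonnegativity of the outputs is meaningful.
C = np.cov(X)
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ X

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def cost(theta):
    # J(W) = E[ ||min(y, 0)||^2 ]: zero iff all outputs are nonnegative.
    Y = rotate(theta) @ Z
    return np.mean(np.minimum(Y, 0.0) ** 2)

# Coarse scan to land in the right basin, then gradient descent on the
# rotation angle (the orthogonal constraint is built into rotate()).
theta = min(np.linspace(0, 2 * np.pi, 360, endpoint=False), key=cost)
for _ in range(100):
    eps = 1e-5
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= 1.0 * grad

Y = rotate(theta) @ Z              # ~ a positive permutation of S
```

As the letter's theorem predicts, driving the negativity cost to (near) zero recovers the sources up to permutation; the small residual cost here comes from finite-sample whitening error.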
Mix Model Comparison of Low Feed-Through Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.
2016-10-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through, isolate local mix at the gas-ablator interface, and produce core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix will be considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced diffusivity model with species, thermal, and pressure gradient terms. We also give predictions for an upcoming campaign to investigate mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Maisuradze, Gia G; Leitner, David M
2007-05-15
Dihedral principal component analysis (dPCA) has recently been developed and shown to display complex features of the free energy landscape of a biomolecule that may be absent in the free energy landscape plotted in principal component space, due to the mixing of internal and overall rotational motion that can occur in principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that for both methods the sampling convergence can be reached over a similar time. Minima in the free energy landscape in the space of the two largest dihedral principal components often correspond to unique structures, though we also find some distinct minima to correspond to the same structure. 2007 Wiley-Liss, Inc.
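The dPCA construction itself is compact: each dihedral angle is replaced by its sine and cosine (removing the 2π-periodicity problem) before ordinary PCA is applied. The sketch below uses a synthetic two-conformer "trajectory" as an assumed stand-in for molecular dynamics data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "trajectory": 1000 frames of 3 dihedral angles (radians),
# clustered around two conformations to mimic a free-energy landscape.
n_frames = 1000
centers = np.array([[-1.0,  2.0, 0.5],
                    [ 1.5, -2.5, 2.8]])
labels = rng.integers(0, 2, n_frames)
dihedrals = centers[labels] + 0.2 * rng.standard_normal((n_frames, 3))

# dPCA: map each angle phi to (cos phi, sin phi) so the circular
# variables become well-behaved Cartesian ones, then do ordinary PCA.
q = np.concatenate([np.cos(dihedrals), np.sin(dihedrals)], axis=1)
qc = q - q.mean(axis=0)
cov = qc.T @ qc / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Project every frame onto the two largest dihedral principal components;
# minima of the free energy in this plane correspond to conformers.
proj = qc @ evecs[:, :2]
explained = evals[:2].sum() / evals.sum()
```

With two well-separated conformers, the two leading dihedral principal components capture most of the variance and separate the clusters, which is the landscape structure the abstract discusses.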
NASA Astrophysics Data System (ADS)
Bizoń, Piotr; Chmaj, Tadeusz; Szpak, Nikodem
2011-10-01
We study dynamics near the threshold for blowup in the focusing nonlinear Klein-Gordon equation utt - uxx + u - |u|2αu = 0 on the line. Using mixed numerical and analytical methods we find that solutions starting from even initial data, fine-tuned to the threshold, are trapped by the static solution S for intermediate times. The details of trapping are shown to depend on the power α, namely, we observe fast convergence to S for α > 1, slow convergence for α = 1, and very slow (if any) convergence for 0 < α < 1. Our findings are complementary with respect to the recent rigorous analysis of the same problem (for α > 2) by Krieger, Nakanishi, and Schlag ["Global dynamics above from the ground state energy for the one-dimensional NLKG equation," preprint arXiv:1011.1776 [math.AP
Measuring patterns in team interaction sequences using a discrete recurrence approach.
Gorman, Jamie C; Cooke, Nancy J; Amazeen, Polemnia G; Fouse, Shannon
2012-08-01
Recurrence-based measures of communication determinism and pattern information are described and validated using previously collected team interaction data. Team coordination dynamics has revealed that "mixing" team membership can lead to flexible interaction processes, but keeping a team "intact" can lead to rigid interaction processes. We hypothesized that communication of intact teams would have greater determinism and higher pattern information compared to that of mixed teams. Determinism and pattern information were measured from three-person Uninhabited Air Vehicle team communication sequences over a series of 40-minute missions. Because team members communicated using push-to-talk buttons, communication sequences were automatically generated during each mission. The Composition x Mission determinism effect was significant. Intact teams' determinism increased over missions, whereas mixed teams' determinism did not change. Intact teams had significantly higher maximum pattern information than mixed teams. Results from these new communication analysis methods converge with content-based methods and support our hypotheses. Because they are not content based, and because they are automatic and fast, these new methods may be amenable to real-time communication pattern analysis.
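A determinism measure of the kind described can be sketched as the fraction of recurrent points that fall on diagonal lines of a recurrence plot built from a categorical sequence. The sequences below are synthetic stand-ins for the push-to-talk communication data, with a rigid (periodic) sequence playing the role of an intact team and an unstructured one playing the role of a mixed team.

```python
import numpy as np

def percent_determinism(seq, lmin=2):
    """%DET: fraction of recurrent points lying on diagonal lines of
    length >= lmin in the recurrence plot of a categorical sequence.
    The line of identity (main diagonal) is excluded."""
    s = np.asarray(seq)
    n = len(s)
    R = s[:, None] == s[None, :]
    recurrent, on_lines = 0, 0
    for k in range(1, n):                 # upper triangle; plot is symmetric
        d = np.diagonal(R, offset=k)
        recurrent += d.sum()
        run = 0
        for v in list(d) + [False]:       # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / recurrent if recurrent else 0.0

# Rigid (intact-team-like) vs unstructured (mixed-team-like) turn-taking.
rng = np.random.default_rng(3)
rigid = np.tile([0, 1, 2], 60)                 # strictly periodic turns
flexible = rng.integers(0, 6, size=180)        # random speaker sequence

det_rigid = percent_determinism(rigid)
det_flex = percent_determinism(flexible)
```

A strictly periodic sequence puts every recurrence on a long diagonal (determinism of 1), whereas a random sequence scatters many of its recurrences as isolated points, matching the rigid-versus-flexible contrast the study draws.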
Numerical Modeling of Fuel Injection into an Accelerating, Turning Flow with a Cavity
NASA Astrophysics Data System (ADS)
Colcord, Ben James
Deliberate continuation of the combustion in the turbine passages of a gas turbine engine has the potential to increase the efficiency and the specific thrust or power of current gas-turbine engines. This concept, known as a turbine-burner, must overcome many challenges before becoming a viable product. One major challenge is the injection, mixing, ignition, and burning of fuel within a short residence time in a turbine passage characterized by large three-dimensional accelerations. One method of increasing the residence time is to inject the fuel into a cavity adjacent to the turbine passage, creating a low-speed zone for mixing and combustion. This situation is simulated numerically, with the turbine passage modeled as a turning, converging channel flow of high-temperature, vitiated air adjacent to a cavity. Both two- and three-dimensional, reacting and non-reacting calculations are performed, examining the effects of channel curvature and convergence, fuel and additional air injection configurations, and inlet conditions. Two-dimensional, non-reacting calculations show that higher aspect ratio cavities improve the fluid interaction between the channel flow and the cavity, and that the cavity dimensions are important for enhancing the mixing. Two-dimensional, reacting calculations show that converging channels improve the combustion efficiency. Channel curvature can be either beneficial or detrimental to combustion efficiency, depending on the location of the cavity and the fuel and air injection configuration. Three-dimensional, reacting calculations show that injecting fuel and air so as to disrupt the natural motion of the cavity stimulates three-dimensional instability and improves the combustion efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Dale A.
This model description is supplemental to the Lawrence Livermore National Laboratory (LLNL) report LLNL-TR-642494, Technoeconomic Evaluation of MEA versus Mixed Amines for CO2 Removal at Near-Commercial Scale at Duke Energy Gibson 3 Plant. We describe the assumptions and methodology used in the Laboratory's simulation of its understanding of Huaneng's novel amine solvent for CO2 capture with 35% mixed amine. The results of that simulation have been described in LLNL-TR-642494. The simulation was performed using ASPEN 7.0. The composition of Huaneng's novel amine solvent was estimated based on information gleaned from Huaneng patents. The chemistry of the process was described using nine equations, representing reactions within the absorber and stripper columns using the ELECTNRTL property method. As a rate-based ASPEN simulation model was not available to Lawrence Livermore at the time of writing, the height of a theoretical plate was estimated using open literature for similar processes. The composition of the flue gas was estimated based on information supplied by Duke Energy for Unit 3 of the Gibson plant. The simulation was scaled at one million short tons of CO2 absorbed per year. To aid stability of the model, convergence of the main solvent recycle loop was implemented manually, as described in the Blocks section below. Automatic convergence of this loop led to instability during the model iterations. Manual convergence of the loop enabled accurate representation and maintenance of model stability.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
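For context, the traditional SCF mixing scheme that the FEAST-based approach aims to outperform can be sketched on a toy nonlinear eigenproblem H(ψ)ψ = Eψ. The cubic density dependence below is an assumed model chosen for brevity, not the paper's Kohn-Sham Hamiltonian.

```python
import numpy as np

# Toy nonlinear eigenproblem H(psi) psi = E psi with a density-dependent
# Hamiltonian H(psi) = H0 + g * diag(|psi|^2) (Gross-Pitaevskii-like).
H0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])
g = 0.5

def H(psi):
    return H0 + g * np.diag(np.abs(psi) ** 2)

# Traditional SCF with simple linear mixing of the occupied eigenvector.
psi = np.ones(3) / np.sqrt(3.0)
alpha = 0.5                       # mixing parameter
for _ in range(200):
    E, V = np.linalg.eigh(H(psi))
    psi_new = V[:, 0]             # occupy the lowest state
    if psi_new @ psi < 0:         # fix the arbitrary eigenvector sign
        psi_new = -psi_new
    psi = (1 - alpha) * psi + alpha * psi_new
    psi /= np.linalg.norm(psi)

E0 = psi @ H(psi) @ psi           # converged nonlinear eigenvalue
residual = np.linalg.norm(H(psi) @ psi - E0 * psi)
```

At the fixed point of the mixing iteration, ψ is an eigenvector of its own Hamiltonian; the sensitivity of this loop to the mixing parameter and the initial guess is exactly the fragility the nonlinear FEAST generalization is designed to remove.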
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements, and mildly dependent on the spectral degree via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
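The power of block-diagonal preconditioning for saddle point systems can be seen in a small numerical sketch. The matrix below is a generic saddle point system (not a spectral element discretization), preconditioned with its diagonal block and the exact Schur complement; the classic Murphy-Golub-Wathen result says the preconditioned matrix then has at most three distinct eigenvalues, so a Krylov method such as MINRES converges in at most three iterations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric indefinite saddle point system K = [[A, B^T], [B, 0]],
# with A SPD (the elasticity/Stokes "stiffness" block) and B full rank.
m, p = 6, 3
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)
B = rng.standard_normal((p, m))
K = np.block([[A, B.T],
              [B, np.zeros((p, p))]])

# Block-diagonal preconditioner from A and the exact Schur complement.
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((m, p))],
              [np.zeros((p, m)), S]])

# Eigenvalues of P^{-1} K cluster exactly at 1 and (1 +/- sqrt(5))/2.
eigs = np.linalg.eigvals(np.linalg.solve(P, K)).real
targets = np.array([1.0, (1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2])
dist = np.abs(eigs[:, None] - targets[None, :]).min(axis=1)
```

In practice the exact Schur complement is replaced by a cheap spectrally equivalent approximation (a mass matrix for Stokes); the eigenvalues then spread into three narrow clusters, and the iteration counts stay bounded independently of the discretization parameters, which is the behavior the paper establishes for mixed spectral elements.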
A high-order staggered meshless method for elliptic problems
Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston
2017-03-21
Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter though, our approach is truly meshless because it only requires the graph of nearby neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method, which attains $O(h^{m})$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.
Jordan, Gerald; Malla, Ashok; Iyer, Srividya N
2016-07-25
The suffering people experience following a first episode of psychosis is great and has been well-investigated. Conversely, potential positive outcomes following a first episode of psychosis have been under-investigated. One such outcome that may result from a first episode of psychosis is posttraumatic growth, or a positive aftermath following the trauma of a first psychotic episode. While posttraumatic growth has been described following other physical and mental illnesses, posttraumatic growth has received very little attention following a first episode of psychosis. To address this research gap, we will conduct a mixed methods study aimed at answering two research questions: 1) How do people experience posttraumatic growth following a first episode of psychosis? 2) What predicts, or facilitates, posttraumatic growth following a first episode of psychosis? The research questions will be investigated using a mixed methods convergent design. All participants will be service-users being offered treatment for a first episode of psychosis at a specialized early intervention service for young people with psychosis, as well as their case managers. A qualitative descriptive methodology will guide data collection through semi-structured interviews with service-users. Service-users and case managers will complete questionnaires related to posttraumatic growth and its potential predictors using quantitative methods. These predictors include the impact of a first episode of psychosis on service-users' lives; the coping strategies they use; the level of social support they enjoy; and their experiences of resilience and recovery. Qualitative data will be subject to thematic analysis, quantitative data will be subject to multiple regression analyses, and results from both methods will be combined to answer the research questions in a holistic way.
Findings from this study are expected to show that in addition to suffering, people with a first episode of psychosis may experience positive changes. This study will be one of few to have investigated posttraumatic growth following a first episode of psychosis, and will be the first to do so with a mixed methods approach.
Mixed convection flow of viscoelastic fluid by a stretching cylinder with heat transfer.
Hayat, Tasawar; Anwar, Muhammad Shoaib; Farooq, Muhammad; Alsaedi, Ahmad
2015-01-01
Flow of viscoelastic fluid due to an impermeable stretching cylinder is discussed. Effects of mixed convection and variable thermal conductivity are present. Thermal conductivity is taken to be temperature dependent. The nonlinear partial differential system is reduced to a nonlinear ordinary differential system. The resulting nonlinear system is computed for the convergent series solutions. Numerical values of the skin friction coefficient and Nusselt number are computed and discussed. The results obtained with the current method are in agreement with previous studies using other methods as well as theoretical ideas. Physical interpretation reflecting the contribution of influential parameters in the present flow is presented. It is hoped that the present study serves as a stimulus for modeling further stretching flows, especially in polymeric and paper production processes.
A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection
NASA Technical Reports Server (NTRS)
Buell, Jeffrey C.
1988-01-01
A fast and accurate numerical method, for nonlinear conservation equation systems whose solutions are periodic in two of the three spatial dimensions, is presently implemented for the case of Rayleigh-Benard convection between two rigid parallel plates in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions secure the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.
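The mixed strategy (spectral treatment of the periodic directions, a different discretization in the remaining one) can be illustrated on a model Poisson problem rather than the full Rayleigh-Benard system. The sketch below, a simplification assumed for illustration, uses a Fourier transform in the periodic direction and second-order finite differences in the wall-normal direction, and verifies the expected second-order convergence against a manufactured solution.

```python
import numpy as np

def solve_poisson(nx, nz):
    """Solve u_xx + u_zz = f on [0, 2*pi) x [0, 1], periodic in x and
    u = 0 at z = 0, 1: Fourier in x, second-order finite differences in z.
    Returns the max-norm error against a manufactured solution."""
    Lx = 2 * np.pi
    x = np.arange(nx) * Lx / nx
    dz = 1.0 / (nz + 1)
    z = np.linspace(dz, 1 - dz, nz)          # interior z points
    X, Z = np.meshgrid(x, z, indexing="ij")

    exact = np.sin(X) * np.sin(np.pi * Z)    # manufactured solution
    f = -(1 + np.pi ** 2) * exact

    fh = np.fft.rfft(f, axis=0)              # transform in the periodic x
    k = 2 * np.pi * np.fft.rfftfreq(nx, d=Lx / nx)

    # Second-difference matrix with homogeneous Dirichlet conditions.
    D2 = (np.diag(-2.0 * np.ones(nz)) +
          np.diag(np.ones(nz - 1), 1) +
          np.diag(np.ones(nz - 1), -1)) / dz ** 2

    uh = np.empty_like(fh)
    for m in range(len(k)):                  # one tridiagonal solve per mode
        uh[m] = np.linalg.solve(D2 - k[m] ** 2 * np.eye(nz), fh[m])

    u = np.fft.irfft(uh, n=nx, axis=0)
    return np.max(np.abs(u - exact))

e_coarse = solve_poisson(16, 15)             # dz = 1/16
e_fine = solve_poisson(16, 31)               # dz = 1/32
ratio = e_coarse / e_fine                    # expect ~4 for 2nd order
```

Halving the wall-normal grid spacing cuts the error by about a factor of four, the kind of convergence-rate verification the abstract reports for the full mixed finite difference/Galerkin scheme.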
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
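The abstract's core idea, cheap reduced-precision work inside an iterative refinement loop that still reaches full double-precision accuracy, can be sketched outside the SDC setting. Below is a minimal, hypothetical illustration using classical mixed-precision iterative refinement on a 2x2 linear solve, with binary32 rounding emulated via `struct`; it is not the SDC algorithm itself, and the problem data are invented for the example.

```python
import struct

def to_f32(x):
    # Round a Python float (binary64) to binary32, emulating reduced precision.
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2x2_f32(A, b):
    # Cramer's rule with every intermediate rounded to float32:
    # a stand-in for a cheap reduced-precision inner solve.
    det = to_f32(to_f32(A[0][0]*A[1][1]) - to_f32(A[0][1]*A[1][0]))
    x0 = to_f32(to_f32(to_f32(b[0]*A[1][1]) - to_f32(b[1]*A[0][1])) / det)
    x1 = to_f32(to_f32(to_f32(b[1]*A[0][0]) - to_f32(b[0]*A[1][0])) / det)
    return [x0, x1]

def mixed_precision_refine(A, b, sweeps=5):
    x = [0.0, 0.0]
    for _ in range(sweeps):
        # Residual evaluated in full (double) precision ...
        r = [b[i] - sum(A[i][j]*x[j] for j in range(2)) for i in range(2)]
        # ... correction obtained from the reduced-precision solve.
        d = solve2x2_f32(A, r)
        x = [x[i] + d[i] for i in range(2)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = mixed_precision_refine(A, b)
# Each sweep shrinks the error by roughly the float32 unit roundoff,
# so a handful of sweeps reach double-precision accuracy.
```

The design point mirrors the abstract: accuracy is set by the precision of the residual (outer) loop, not by the precision of the inner correction sweeps.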
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.
In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between about 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5; to have a fully resolved DNS through the end of our simulations, the grid spacing would need to be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).
2017-04-20
The role of mixed methods in improved cookstove research.
Stanistreet, Debbi; Hyseni, Lirije; Bashin, Michelle; Sadumah, Ibrahim; Pope, Daniel; Sage, Michael; Bruce, Nigel
2015-01-01
The challenge of promoting access to clean and efficient household energy for cooking and heating is a critical issue facing low- and middle-income countries today. Along with clean fuels, improved cookstoves (ICSs) continue to play an important part in efforts to reduce the 4 million annual premature deaths attributed to household air pollution. Although a range of ICSs are available, there is little empirical evidence on appropriate behavior change approaches to inform adoption and sustained use at scale. Specifically, evaluations using either quantitative or qualitative methods provide an incomplete picture of the challenges in facilitating ICS adoption. This article examines how studies that use the strengths of both these approaches can offer important insights into behavior change in relation to ICS uptake and scale-up. Epistemological approaches, study design frameworks, methods of data collection, analytical approaches, and issues of validity and reliability in the context of mixed methods ICS research are examined, and the article presents an example study design from an evaluation study in Kenya incorporating a nested approach and a convergent case-oriented design. The authors discuss the benefits and methodological challenges of mixed-methods approaches in the context of researching behavior change and ICS use, recognizing that such methods represent relatively uncharted territory. The authors propose that more published examples are needed to provide frameworks for other researchers seeking to apply mixed methods in this context and suggest that a comprehensive research agenda is required that incorporates integrated mixed-methods approaches, to provide best evidence for future scale-up.
ERIC Educational Resources Information Center
Ulubey, Özgür; Yildirim, Kasim; Alpaslan, Muhammet Mustafa; Aykaç, Necdet
2017-01-01
The purpose of the current study is to determine teachers' opinions about professional development schools. In the current study; one of the mixed methods, the convergent design was employed. The sampling of the quantitative dimension of the study is comprised of 256 teachers working in 21 elementary and secondary schools in the city of Mugla. The…
ERIC Educational Resources Information Center
Handlos DeVoe, Debra Jean
2016-01-01
The Hispanic population in the United States is changing and will constitute 30% of the population in 2050; however, the Hispanic registered nurse population is less than 3%. Cultural differences between patients and nurses may cause harm and a mistrust that can affect patient outcomes. A mixed methods convergent research study was done by an…
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. As a result, we report computational results on server location and unit commitment instances.
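As a toy illustration of the Progressive Hedging mechanics the abstract builds on (not the PH–DD integration itself), consider a hypothetical two-scenario quadratic program whose scenario subproblems admit closed-form solutions. The weights `w` play the role of the dual information that the paper transforms into Lagrange multipliers for DD; all data below are invented for the sketch.

```python
# Hypothetical two-scenario problem: min over x of E[(x - xi_s)^2].
# The nonanticipative optimum is the probability-weighted mean of xi.
scenarios = [1.0, 5.0]         # xi values (assumed data)
probs = [0.5, 0.5]             # scenario probabilities
rho = 1.0                      # PH penalty parameter
w = [0.0, 0.0]                 # PH weights (dual estimates)
xbar = 0.0                     # consensus (implementable) solution

for _ in range(100):
    # Scenario subproblems solved in closed form:
    #   argmin_x (x - xi)^2 + w*x + (rho/2)*(x - xbar)^2
    xs = [(2*xi - wi + rho*xbar) / (2 + rho) for xi, wi in zip(scenarios, w)]
    # Aggregate to the new consensus value.
    xbar = sum(p*x for p, x in zip(probs, xs))
    # Weight update drives scenario solutions toward consensus;
    # at convergence, -w recovers the scenario subgradients (dual info).
    w = [wi + rho*(x - xbar) for wi, x in zip(w, xs)]
```

For this problem the iteration converges geometrically to the consensus x = 3 with weights w = (-4, +4), the multipliers one would hand to a dual decomposition method.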
NASA Astrophysics Data System (ADS)
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method, based on the operational matrices of three-dimensional orthogonal Jacobi polynomials, to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the fractional-order problem into an easily solvable system of algebraic equations whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is verified by comparing our Matlab simulation results with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A
2012-01-01
Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts. 
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a priori knowledge or assumptions on the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed to obtain a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
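The "pairwise aggregation based on weighted matching" step above can be sketched with a simple greedy matching, a stand-in for the (near-)optimal matching algorithms actually used in such coarsening; the graph weights below are invented and would, in practice, come from the matrix entries.

```python
def greedy_matching(n, edges):
    # edges: list of (weight, i, j) for an undirected matrix-connectivity graph.
    # Greedily pair the heaviest available edges; each pair becomes one
    # coarse-level aggregate, and unmatched nodes become singletons.
    matched = [False] * n
    aggregates = []
    for w, i, j in sorted(edges, reverse=True):
        if not matched[i] and not matched[j]:
            matched[i] = matched[j] = True
            aggregates.append((i, j))
    aggregates += [(i,) for i in range(n) if not matched[i]]
    return aggregates

# Toy 5-node graph (assumed weights, e.g. |a_ij| of a sparse matrix).
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.7, 2, 3), (0.1, 3, 4)]
aggs = greedy_matching(5, edges)
# Pairing roughly halves the number of unknowns per level, which is the
# coarsening ratio the compatible-weighted-matching process aims for.
```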
Qualitative and mixed methods in mental health services and implementation research.
Palinkas, Lawrence A
2014-01-01
Qualitative and mixed methods play a prominent role in mental health services research. However, the standards for their use are not always evident, especially for those not trained in such methods. This article reviews the rationale and common approaches to using qualitative and mixed methods in mental health services and implementation research based on a review of the articles included in this special series along with representative examples from the literature. Qualitative methods are used to provide a "thick description" or depth of understanding to complement breadth of understanding afforded by quantitative methods, elicit the perspective of those being studied, explore issues that have not been well studied, develop conceptual theories or test hypotheses, or evaluate the process of a phenomenon or intervention. Qualitative methods adhere to many of the same principles of scientific rigor as quantitative methods but often differ with respect to study design, data collection, and data analysis strategies. For instance, participants for qualitative studies are usually sampled purposefully rather than at random and the design usually reflects an iterative process alternating between data collection and analysis. The most common techniques for data collection are individual semistructured interviews, focus groups, document reviews, and participant observation. Strategies for analysis are usually inductive, based on principles of grounded theory or phenomenology. Qualitative methods are also used in combination with quantitative methods in mixed-method designs for convergence, complementarity, expansion, development, and sampling. Rigorously applied qualitative methods offer great potential in contributing to the scientific foundation of mental health services research.
Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.
2014-01-01
The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672
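The penalty idea the abstract contrasts with Lagrange-multiplier enforcement can be shown on a minimal, hypothetical example: a two-variable quadratic "energy" with a linear constraint standing in for incompressibility. This is only a sketch of the penalty mechanism, not the cardiac-mechanics formulation itself.

```python
def solve2(a11, a12, a21, a22, b1, b2):
    # Cramer's rule for a 2x2 linear system.
    det = a11*a22 - a12*a21
    return ((b1*a22 - b2*a12) / det, (a11*b2 - a21*b1) / det)

def penalty_minimum(kappa):
    # Toy energy (x-2)^2 + (y-1)^2 with the linear "incompressibility"
    # constraint x + y = 1 enforced by the penalty (kappa/2)*(x + y - 1)^2.
    # Stationarity of the penalized energy gives a linear system in (x, y).
    return solve2(2 + kappa, kappa,
                  kappa, 2 + kappa,
                  4 + kappa, 2 + kappa)

for kappa in (1e1, 1e3, 1e5):
    x, y = penalty_minimum(kappa)
    # (x, y) approaches the exact constrained minimizer (1, 0) as kappa
    # grows; the constraint violation x + y - 1 shrinks like O(1/kappa).
```

The trade-off the paper addresses is visible even here: accuracy requires large kappa, which worsens conditioning, whereas LM and weakly penalized forms avoid that stiffness.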
ERIC Educational Resources Information Center
Ünlü, Melihan
2018-01-01
The purpose of the current study was to investigate the effect of micro-teaching practices with concrete models on the pre-service teachers' self-efficacy beliefs about using concrete models and to determine the opinions of the pre-service teachers about this issue. In the current study, one of the mixed methods, the convergent design (embedded)…
Kolbe, Nina; Kugler, Christiane; Schnepp, Wilfried; Jaarsma, Tiny
2016-01-01
Patients with heart failure (HF) often worry about resuming sexual activity and may need information. Nurses have a role in helping patients to live with the consequences of HF and can be expected to discuss patients' sexual concerns. The aims of this study were to identify whether nurses discuss consequences of HF on sexuality with patients and to explore their perceived role and barriers regarding this topic. A cross-sectional research design with a convergent parallel mixed method approach was used combining qualitative and quantitative data collected with a self-reported questionnaire. Nurses in this study rarely addressed sexual issues with their patients. The nurses did not feel that discussing sexual concerns with their patients was their responsibility, and only 8% of the nurses expressed confidence to do so. The main phenomenon in discussing sexual concerns seems to be "one of silence": Neither patients nor nurses talk about sexual concerns. Factors influencing this include structural barriers, lack of knowledge and communication skills, as well as relevance of the topic and relationship to patients. Cardiac nurses in Germany rarely practice sexual counseling; the prevailing phenomenon is one of silence. Education and skill-based training might hold potential to "break the silence."
Numerical methods for systems of conservation laws of mixed type using flux splitting
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1990-01-01
The essentially non-oscillatory (ENO) finite difference scheme is applied to systems of conservation laws of mixed hyperbolic-elliptic type. A flux splitting, with the corresponding Jacobi matrices having real and positive/negative eigenvalues, is used. The hyperbolic ENO operator is applied separately. The scheme is numerically tested on the van der Waals equation in fluid dynamics. Convergence was observed with good resolution to weak solutions for various Riemann problems, which are then numerically checked to be admissible as the viscosity-capillarity limits. The interesting phenomenon of elliptic regions shrinking, when present in the initial conditions, was also observed.
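The flux-splitting step described above can be sketched for the inviscid Burgers equation with a Lax-Friedrichs splitting: f = f⁺ + f⁻, where f⁺ carries nonnegative and f⁻ nonpositive characteristic speeds, each upwinded separately. A first-order upwind difference stands in for the higher-order ENO reconstruction, and the test problem is invented; this is a sketch of the splitting mechanics, not the mixed-type van der Waals computation.

```python
import math

def burgers_lf_split_step(u, dx, dt):
    # Lax-Friedrichs flux splitting for f(u) = u^2/2:
    #   f+-(u) = 0.5*(f(u) +- alpha*u),  alpha >= max|f'(u)| = max|u|,
    # so f+ has nonnegative and f- nonpositive characteristic speeds.
    n = len(u)
    alpha = max(abs(v) for v in u)
    fp = [0.5*(0.5*v*v + alpha*v) for v in u]   # f+, upwinded from the left
    fm = [0.5*(0.5*v*v - alpha*v) for v in u]   # f-, upwinded from the right
    unew = []
    for i in range(n):
        # First-order upwind stand-in for the ENO reconstruction (periodic).
        flux_r = fp[i] + fm[(i + 1) % n]        # interface i+1/2
        flux_l = fp[(i - 1) % n] + fm[i]        # interface i-1/2
        unew.append(u[i] - dt/dx*(flux_r - flux_l))
    return unew

n = 100
dx = 1.0 / n
u = [math.sin(2*math.pi*i*dx) for i in range(n)]
mass0 = sum(u)
for _ in range(50):
    u = burgers_lf_split_step(u, dx, dt=0.5*dx)
# The scheme is conservative: the fluxes telescope, so total "mass"
# is preserved to rounding error even as a shock forms.
```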
Higher order temporal finite element methods through mixed formalisms.
Kim, Jinkyu
2014-01-01
The extended framework of Hamilton's principle and the mixed convolved action principle provide a new, rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher-order approximations is investigated. Classical single-degree-of-freedom dynamical systems are primarily considered to validate and investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.
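The symplectic, unconditionally stable behavior claimed for the undamped case can be illustrated with the implicit midpoint rule on a single-degree-of-freedom oscillator, a standard symplectic integrator used here as a stand-in for the paper's temporal finite element schemes. For this linear system the update is a Cayley transform of a skew operator, so the energy is conserved exactly (to rounding) for any step size.

```python
def midpoint_step(u, v, h, omega):
    # Implicit midpoint rule for u' = v, v' = -omega^2 * u, solved in
    # closed form. The map conserves E = 0.5*(v^2 + omega^2*u^2) exactly.
    a = 0.5 * h
    d = 1.0 + (a*omega)**2
    un = ((1 - (a*omega)**2)*u + h*v) / d
    vn = (-h*omega**2*u + (1 - (a*omega)**2)*v) / d
    return un, vn

omega, h = 2.0, 0.1        # assumed frequency and (large) time step
u, v = 1.0, 0.0
E0 = 0.5*(v*v + omega**2*u*u)
for _ in range(1000):
    u, v = midpoint_step(u, v, h, omega)
E = 0.5*(v*v + omega**2*u*u)
# E stays at E0 to rounding error regardless of h: no amplitude growth
# or decay, the hallmark of a symplectic, unconditionally stable scheme.
```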
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
NASA Astrophysics Data System (ADS)
Cui, Sheng; Qiu, Chen; Ke, Changjian; He, Sheng; Liu, Deming
2015-11-01
This paper presents a method that is able to monitor the chromatic dispersion (CD) and identify the modulation format (MF) of optical signals simultaneously. The method utilizes features of the output curve of a highly sensitive all-optical CD monitor based on four-wave mixing (FWM). From the symmetric center of the curve, the CD can be estimated blindly and independently, while from the profile and convergence region of the curve, ten commonly used modulation formats can be recognized with a simple algorithm based on a maximum correlation classifier. This technique does not need any high-speed optoelectronics and has no limitation on signal rate. Furthermore, it can tolerate large CD distortions and is robust to polarization mode dispersion (PMD) and amplified spontaneous emission (ASE) noise.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches remains the subject of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers clarify their theoretical propositions and the basis of their results.
This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory.
NASA Technical Reports Server (NTRS)
Holdeman, James D.
1991-01-01
Experimental and computational results on the mixing of single, double, and opposed rows of jets with an isothermal or variable temperature mainstream in a confined subsonic crossflow are summarized. The studies were performed to investigate flow and geometric variations typical of the complex 3-D flowfield in the dilution zone of combustion chambers in gas turbine engines. The principal observations from the experiments were that the momentum-flux ratio was the most significant flow variable, and that temperature distributions were similar (independent of orifice diameter) when the orifice spacing and the square-root of the momentum-flux ratio were inversely proportional. The experiments and empirical model for the mixing of a single row of jets from round holes were extended to include several variations typical of gas turbine combustors. Combinations of flow and geometry that gave optimum mixing were identified from the experimental results. Based on results of calculations made with a 3-D numerical model, the empirical model was further extended to model the effects of curvature and convergence. The principal conclusions from this study were that the orifice spacing and momentum-flux relationships were the same as observed previously in a straight duct, but the jet structure was significantly different for jets injected from the inner wall of a turn than for those injected from the outer wall. Also, curvature in the axial direction caused a drift of the jet trajectories toward the inner wall, but the mixing in a turning and converging channel did not seem to be inhibited by the convergence, independent of whether the convergence was radial or circumferential. The calculated jet penetration and mixing in an annulus were similar to those in a rectangular duct when the orifice spacing was specified at the radius dividing the annulus into equal areas.
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^{-1/2}) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
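The quadrature comparison the abstract ends on can be sketched in a few lines: replace pseudo-random samples with a low-discrepancy (van der Corput) stream and compare errors on a smooth 1D integrand. This is a minimal stand-in for the paper's τ-leaping setting; the integrand and sample size are invented for the example.

```python
import random

def van_der_corput(n, base=2):
    # Radical-inverse low-discrepancy sequence in [0, 1).
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def estimate(points):
    # Quadrature estimate of the integral of x^2 over [0, 1] (exact: 1/3).
    return sum(x*x for x in points) / len(points)

N = 1024
random.seed(0)
mc = estimate([random.random() for _ in range(N)])          # plain Monte Carlo
qmc = estimate([van_der_corput(i + 1) for i in range(N)])   # low-discrepancy
err_mc, err_qmc = abs(mc - 1/3), abs(qmc - 1/3)
# The MC error decays like O(N^{-1/2}); the low-discrepancy estimate decays
# roughly like O(N^{-1}) for this smooth integrand, so err_qmc is typically
# much smaller at the same N.
```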
Mixed methods systematic review exploring mentorship outcomes in nursing academia.
Nowell, Lorelli; Norris, Jill M; Mrklas, Kelly; White, Deborah E
2017-03-01
The aim of this study was to report on a mixed methods systematic review that critically examines the evidence for mentorship in nursing academia. Nursing education institutions globally have issued calls for mentorship. There is emerging evidence to support the value of mentorship in other disciplines, but the extant state of the evidence in nursing academia is not known. A comprehensive review of the evidence is required. A mixed methods systematic review. Five databases (MEDLINE, CINAHL, EMBASE, ERIC, PsycINFO) were searched using an a priori search strategy from inception to 2 November 2015 to identify quantitative, qualitative and mixed methods studies. Grey literature searches were also conducted in electronic databases (ProQuest Dissertations and Theses, Index to Theses) and mentorship conference proceedings and by hand searching the reference lists of eligible studies. Study quality was assessed prior to inclusion using standardized critical appraisal instruments from the Joanna Briggs Institute. A convergent qualitative synthesis design was used where results from qualitative, quantitative and mixed methods studies were transformed into qualitative findings. Mentorship outcomes were mapped to a theory-informed framework. Thirty-four studies were included in this review, from the 3001 records initially retrieved. In general, mentorship had a positive impact on behavioural, career, attitudinal, relational and motivational outcomes; however, the methodological quality of studies was weak. This review can inform the objectives of mentorship interventions and contribute to a more rigorous approach to studies that assess mentorship outcomes.
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
Discrete conservation properties for shallow water flows using mixed mimetic spectral elements
NASA Astrophysics Data System (ADS)
Lee, D.; Palha, A.; Gerritsma, M.
2018-03-01
A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds pointwise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.
Taylor, Jennifer A; Barnes, Brittany; Davis, Andrea L; Wright, Jasmine; Widman, Shannon; LeVasseur, Michael
2016-02-01
Struck-by injuries were observed to be higher among females than among males in an urban fire department. The disparity was investigated while gaining a grounded understanding of EMS responder experiences of patient-initiated violence. A convergent parallel mixed methods design was employed. Using a linked injury dataset, patient-initiated violence estimates were calculated comparing genders. Semi-structured interviews and a focus group were conducted with injured EMS responders. Paramedics had significantly higher odds of patient-initiated violence injuries than firefighters (OR 14.4, 95%CI: 9.2-22.2, P < 0.001). Females reported increased odds of patient-initiated violence injuries compared with males (OR = 6.25, 95%CI 3.8-10.2), but this relationship was entirely mediated through occupation (AOR = 1.64, 95%CI 0.94-2.85). Qualitative data illuminated the impact of patient-initiated violence and highlighted important organizational opportunities for intervention. Mixed methods greatly enhanced the assessment of EMS responder patient-initiated violence prevention. © 2016 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc.
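The odds ratios and confidence intervals quoted above are standard 2x2-table quantities; a minimal sketch follows (the counts are hypothetical, since the abstract does not give the raw table):

```python
# Illustrative sketch with hypothetical counts (not the study's data):
# an odds ratio and Wald 95% confidence interval of the kind reported above.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c) with a Wald CI on the log-odds scale.

    a: exposed cases,   b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: 30 injured / 70 uninjured in one group,
# 10 injured / 90 uninjured in the other.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```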
ERIC Educational Resources Information Center
Hatley, Leshell April Denise
2016-01-01
Today, most young people in the United States (U.S.) live technology-saturated lives. Their educational, entertainment, and career options originate from and demand incredible technological innovations. However, this extensive ownership of and access to technology does not indicate that today's youth know how technology works or how to control and…
Fincke, James R [Idaho Falls, ID; Detering, Brent A [Idaho Falls, ID
2009-08-18
An apparatus for thermal conversion of one or more reactants to desired end products includes an insulated reactor chamber having a high temperature heater such as a plasma torch at its inlet end and, optionally, a restrictive convergent-divergent nozzle at its outlet end. In a thermal conversion method, reactants are injected upstream from the reactor chamber and thoroughly mixed with the plasma stream before entering the reactor chamber. The reactor chamber has a reaction zone that is maintained at a substantially uniform temperature. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle, which "freezes" the desired end product(s) in the heated equilibrium reaction stage, or is discharged through an outlet pipe without the convergent-divergent nozzle. The desired end products are then separated from the gaseous stream.
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. Animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data comprised 6,732,765 test-day milk yields. Both models included pedigree information of 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. To solve the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
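A minimal sketch of the preconditioned conjugate gradient iteration described here, with a plain diagonal (Jacobi) preconditioner standing in for the diagonal blocks of the coefficient matrix (an illustration only, not the paper's iteration-on-data implementation):

```python
# Illustrative PCG sketch: only four solution-length work vectors
# (x, r, z, p) are kept, mirroring the memory layout noted in the abstract.
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Solve A x = b with PCG; M_inv_diag holds 1/diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test system standing in for the mixed model equations.
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50 * np.eye(50)   # symmetric positive definite
b = rng.standard_normal(50)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```

Keeping only a handful of solution-length vectors is what makes the method attractive for mixed model equations with millions of unknowns.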
An improved 3D MoF method based on analytical partial derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2016-12-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
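The general point, that supplying analytical derivatives to an optimiser is cheaper and more robust than finite differencing, can be illustrated with SciPy (the Rosenbrock function here is only a stand-in; the paper's objective is the 3D MoF reconstruction error):

```python
# Illustrative sketch of analytic vs finite-difference gradients in an
# optimiser (not the paper's MoF geometry). With jac=None SciPy approximates
# the gradient by finite differences; with jac=rosen_der it is exact.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])

# Finite-difference gradient: extra function evaluations per iteration.
fd = minimize(rosen, x0, method="BFGS")
# Analytical gradient: exact derivatives supplied via jac=.
an = minimize(rosen, x0, jac=rosen_der, method="BFGS")

print("FD  function evals:", fd.nfev)
print("ANA function evals:", an.nfev)
```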
Luther, Lauren; Firmin, Ruth L; Lysaker, Paul H; Minor, Kyle S; Salyers, Michelle P
2018-04-07
An array of self-reported, clinician-rated, and performance-based measures has been used to assess motivation in schizophrenia; however, the convergent validity evidence for these motivation assessment methods is mixed. The current study is a series of meta-analyses that summarize the relationships between methods of motivation measurement in 45 studies of people with schizophrenia. The overall mean effect size between self-reported and clinician-rated motivation measures (r = 0.27, k = 33) was significant, positive, and approaching medium in magnitude, and the overall effect size between performance-based and clinician-rated motivation measures (r = 0.21, k = 11) was positive, significant, and small in magnitude. The overall mean effect size between self-reported and performance-based motivation measures was negligible and non-significant (r = -0.001, k = 2), but this meta-analysis was underpowered. Findings suggest modest convergent validity between clinician-rated and both self-reported and performance-based motivation measures, but additional work is needed to clarify the convergent validity between self-reported and performance-based measures. Further, there is likely more variability than similarity in the underlying construct that is being assessed across the three methods, particularly between the performance-based and other motivation measurement types. These motivation assessment methods should not be used interchangeably, and measures should be more precisely described as the specific motivational construct or domain they are capturing. Copyright © 2018 Elsevier Ltd. All rights reserved.
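Pooled correlations of the kind reported above are conventionally computed by Fisher z-averaging; a minimal sketch with made-up study data (the 45 underlying studies are not listed in the abstract):

```python
# Illustrative fixed-effect meta-analysis sketch (hypothetical data):
# average Fisher z-transformed correlations weighted by n - 3, then
# back-transform to the r scale.
import math

def mean_effect_size(studies):
    """studies: list of (r, n) pairs; returns the back-transformed mean r."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# Hypothetical correlations between two motivation measures in five studies.
studies = [(0.31, 120), (0.22, 85), (0.35, 60), (0.18, 200), (0.29, 95)]
print(round(mean_effect_size(studies), 3))
```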
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least-squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least-squares finite element method are discussed. In the usual least-squares finite element method, the second-order equation -∇·(∇u) + u = f is recast as a first-order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiment show that, in this usual least-squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least-squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.
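The modification described above amounts to adding the irrotationality residual to the least-squares functional; in sketch form (notation assumed, not quoted from the paper):

```latex
% Sketch of the least-squares functionals implied by the abstract: the usual
% functional penalises the residuals of the first-order system, and the
% "optimal" variant adds the irrotationality residual.
J_{\mathrm{usual}}(u,\mathbf{p})
  = \| -\nabla\cdot\mathbf{p} + u - f \|_{0}^{2}
  + \| \nabla u - \mathbf{p} \|_{0}^{2},
\qquad
J_{\mathrm{opt}}(u,\mathbf{p})
  = J_{\mathrm{usual}}(u,\mathbf{p})
  + \| \nabla\times\mathbf{p} \|_{0}^{2}.
```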
Development of Jet Noise Power Spectral Laws Using SHJAR Data
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2009-01-01
High-quality jet noise spectral data measured at the Aeroacoustic Propulsion Laboratory at the NASA Glenn Research Center is used to examine a number of jet noise scaling laws. Configurations considered in the present study consist of convergent and convergent-divergent axisymmetric nozzles. Following the work of Viswanathan, velocity power factors are estimated using a least squares fit on spectral power density as a function of jet temperature and observer angle. The regression parameters are scrutinized for their uncertainty within the desired confidence margins. As an immediate application of the velocity power laws, spectral densities in supersonic jets are decomposed into their respective components attributed to jet mixing noise and broadband shock-associated noise. Subsequent application of the least squares method on the shock power intensity shows that the latter also scales with some power of the shock parameter. A modified shock parameter is defined in order to reduce the dependency of the regression factors on the nozzle design point within the uncertainty margins of the least squares method.
Chung, C K; Shih, T R; Chen, T C; Wu, B H
2008-10-01
A planar micromixer with rhombic microchannels and a converging-diverging element has been systematically investigated by the Taguchi method, CFD-ACE simulations and experiments. To reduce the footprint and extend the operational range of Reynolds number, the Taguchi method was used to numerically study the performance of the micromixer in an L9 orthogonal array. Mixing efficiency is prominently influenced by geometrical parameters and Reynolds number (Re). The four factors in the L9 orthogonal array are the number of rhombi, the turning angle, the width of the rhombic channel and the width of the throat. The degree of sensitivity by the Taguchi method can be ranked as: number of rhombi > width of the rhombic channel > width of the throat > turning angle of the rhombic channel. Increasing the number of rhombi, reducing the widths of the rhombic channel and throat and lowering the turning angle resulted in better fluid mixing efficiency. The optimal design of the micromixer in simulations indicates over 90% mixing efficiency at both Re ≥ 80 and Re ≤ 0.1. Experimental results for the optimal design are consistent with the simulated ones. This planar rhombic micromixer has simplified the complex fabrication process of multi-layer or three-dimensional micromixers and improved the performance of a previous rhombic micromixer at a reduced footprint and lower Re.
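A main-effects analysis over an L9(3^4) orthogonal array, of the kind used to rank the four factors above, can be sketched as follows (the response values are hypothetical, not the paper's CFD results):

```python
# Illustrative Taguchi main-effects sketch: 9 runs, 4 factors, 3 levels.
import numpy as np

# Standard L9 orthogonal array (levels coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

# Hypothetical mixing efficiencies (%) for the nine runs.
y = np.array([62.0, 71.0, 80.0, 65.0, 78.0, 70.0, 74.0, 69.0, 83.0])

factors = ["rhombi", "angle", "channel width", "throat width"]
ranges = {}
for j, name in enumerate(factors):
    level_means = [y[L9[:, j] == lev].mean() for lev in range(3)]
    ranges[name] = max(level_means) - min(level_means)

# A larger range of level means indicates a stronger influence.
for name in sorted(ranges, key=ranges.get, reverse=True):
    print(f"{name}: {ranges[name]:.2f}")
```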
Air-sea interaction at the subtropical convergence south of Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouault, M.; Lutjeharms, J.R.E.; Ballegooyen, R.C. van
1994-12-31
The oceanic region south of Africa plays a key role in the control of Southern African weather and climate. This is particularly the case for the Subtropical Convergence region, the northern border of the Southern Ocean. An extensive research cruise to investigate this specific front was carried out during June and July 1993. A strong front, the Subtropical Convergence, was identified; however, its geographic disposition was complicated by the presence of an intense warm eddy detached from the Agulhas Current. The warm surface water in the eddy created a strong contrast between it and the overlying atmosphere. Oceanographic measurements (XBT and CTD) were made jointly with radiosonde observations and air-sea interaction measurements. The air-sea interaction measurement system included a Gill sonic anemometer, an Ophir infrared hygrometer, an Eppley pyranometer, an Eppley pyrgeometer and a Vaisala temperature and relative humidity probe. Turbulent fluxes of momentum, sensible heat and latent heat were calculated in real time using the inertial dissipation method and the bulk method. All these measurements allowed a thorough investigation of the net heat loss of the ocean, the deepening of the mixed layer during a severe storm, as well as the structure of the atmospheric boundary layer and ocean-atmosphere exchanges.
Workplace-related generational characteristics of nurses: A mixed-method systematic review.
Stevanin, Simone; Palese, Alvisa; Bressan, Valentina; Vehviläinen-Julkunen, Katri; Kvist, Tarja
2018-06-01
The aim of this study was to describe and summarize workplace characteristics of three nursing generations: Baby Boomers, Generations X and Y. Generational differences affect occupational well-being, nurses' performance, patient outcomes and safety; therefore, nurse managers, administrators and educators are interested increasingly in making evidence-based decisions about the multigenerational nursing workforce. Mixed-method systematic review. Medline, CINAHL, PsycINFO and Scopus (January 1991-January 2017). (1) The Joanna Briggs Institute's method for conducting mixed-method systematic reviews; (2) the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and (3) the Enhancing Transparency in Reporting the Synthesis of Qualitative Research guidelines. The studies' methodological quality was assessed with the Mixed-Methods Appraisal Tool. Quantitative and mixed-method studies were transformed into qualitative methods using a convergent qualitative synthesis and qualitative findings were combined with a narrative synthesis. Thirty-three studies were included with three main themes and 11 subthemes: (1) Job attitudes (work engagement; turnover intentions, reasons for leaving; reasons, incentives/disincentives to continue nursing); (2) Emotion-related job aspects (stress/resilience; well-being/job satisfaction; affective commitment; unit climate; work ethic) and (3) Practice and leadership-related aspects (autonomy; perceived competence; leadership relationships and perceptions). Baby Boomers reported lower levels of stress and burnout than did Generations X and Y, different work engagement, factors affecting workplace well-being and retention and greater intention to leave compared with Generation Y, which was less resilient, but more cohesive. 
Although several studies reported methodological limitations and conflicting findings, generational differences in nurses' job attitudes, emotional, practice and leadership factors should be considered to enhance workplace quality. © 2018 John Wiley & Sons Ltd.
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W.; ...
2015-02-03
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W; Laino, Teodoro; Walker, Ross C; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam
2015-01-01
The implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER are presented. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25649827
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression model. PMID:14736399
An Optimal Order Nonnested Mixed Multigrid Method for Generalized Stokes Problems
NASA Technical Reports Server (NTRS)
Deng, Qingping
1996-01-01
A multigrid algorithm is developed and analyzed for generalized Stokes problems discretized by various nonnested mixed finite elements within a unified framework. It is abstractly proved by an element-independent analysis that the multigrid algorithm converges with an optimal order if there exists a 'good' prolongation operator. A technique to construct a 'good' prolongation operator for nonnested multilevel finite element spaces is proposed. Its basic idea is to introduce a sequence of auxiliary nested multilevel finite element spaces and define a prolongation operator as a composite operator of two single grid level operators. This makes not only the construction of a prolongation operator much easier (the final explicit forms of such prolongation operators are fairly simple), but the verification of the approximate properties for prolongation operators is also simplified. Finally, as an application, the framework and technique are applied to seven typical nonnested mixed finite elements.
Mackay, Bethany A; Shochet, Ian M; Orr, Jayne A
2017-11-01
Despite increased depression in adolescents with Autism Spectrum Disorder (ASD), effective prevention approaches for this population are limited. A mixed methods pilot randomised controlled trial (N = 29) of the evidence-based Resourceful Adolescent Program-Autism Spectrum Disorder (RAP-A-ASD) designed to prevent depression was conducted in schools with adolescents with ASD in years 6 and 7. Quantitative results showed significant intervention effects on parent reports of adolescent coping self-efficacy (maintained at 6 month follow-up) but no effect on depressive symptoms or mental health. Qualitative outcomes reflected perceived improvements from the intervention for adolescents' coping self-efficacy, self-confidence, social skills, and affect regulation. Converging results remain encouraging given this population's difficulties coping with adversity, managing emotions and interacting socially which strongly influence developmental outcomes.
ERIC Educational Resources Information Center
Halliday, Steven W.
2012-01-01
This sequential explanatory mixed methods study tested the learning effectiveness of a codex book against a convergent media resource based on the same content. It also investigated whether users of the two formats reported any differences in their liking of the two formats, or in their tendency to be persuaded to the degree that they altered…
Mixed Integer PDE Constrained Optimization for the Control of a Wildfire Hazard
2017-01-01
are nodes suitable for extinguishing the fire. We introduce a discretization of the time horizon [0, T] by the set of times T := {0, Δt, ..., n_tΔt = T} ... of the constraints and objective with a discrete counterpart. The PDE is replaced by a linear system obtained from a convergent finite difference method [5] and the integral is replaced by a quadrature formula. The domain is discretized by replacing Ω with an equidistant grid of length Δx
Koen, Joshua D.; Yonelinas, Andrew P.
2014-01-01
Although it is generally accepted that aging is associated with recollection impairments, there is considerable disagreement surrounding how healthy aging influences familiarity-based recognition. One factor that might contribute to the mixed findings regarding age differences in familiarity is the estimation method used to quantify the two mnemonic processes. Here, this issue is examined by having a group of older adults (N = 39) between 40 and 81 years of age complete Remember/Know (RK), receiver operating characteristic (ROC), and process dissociation (PD) recognition tests. Estimates of recollection, but not familiarity, showed a significant negative correlation with chronological age. Inconsistent with previous findings, the estimation method did not moderate the relationship between age and estimations of recollection and familiarity. In a final analysis, recollection and familiarity were estimated as latent factors in a confirmatory factor analysis (CFA) that modeled the covariance between measures of free recall and recognition, and the results converged with the results from the RK, PD, and ROC tasks. These results are consistent with the hypothesis that episodic memory declines in older adults are primarily driven by recollection deficits, and also suggest that the estimation method plays little to no role in age-related decreases in familiarity. PMID:25485974
A 2D and 3D Code Comparison of Turbulent Mixing in Spherical Implosions
NASA Astrophysics Data System (ADS)
Flaig, Markus; Thornber, Ben; Grieves, Brian; Youngs, David; Williams, Robin; Clark, Dan; Weber, Chris
2016-10-01
Turbulent mixing due to Richtmyer-Meshkov and Rayleigh-Taylor instabilities has proven to be a major obstacle on the way to achieving ignition in inertial confinement fusion (ICF) implosions. Numerical simulations are an important tool for understanding the mixing process; however, the results of such simulations depend on the choice of grid geometry and the numerical scheme used. In order to clarify this issue, we compare the simulation codes FLASH, TURMOIL, HYDRA, MIRANDA and FLAMENCO for the problem of the growth of single- and multi-mode perturbations on the inner interface of a dense imploding shell. We consider two setups: a single-shock setup with a convergence ratio of 4, as well as a higher convergence multi-shock setup that mimics a typical NIF mixcap experiment. We employ both single-mode and ICF-like broadband perturbations. We find good agreement between all codes concerning the evolution of the mix layer width; however, there are differences in the small-scale mixing. We also develop a Bell-Plesset model that is able to predict the mix layer width and find excellent agreement with the simulation results. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government.
Qualitative Methods in Mental Health Services Research
Palinkas, Lawrence A.
2014-01-01
Qualitative and mixed methods play a prominent role in mental health services research. However, the standards for their use are not always evident, especially for those not trained in such methods. This paper reviews the rationale and common approaches to using qualitative and mixed methods in mental health services and implementation research based on a review of the papers included in this special series along with representative examples from the literature. Qualitative methods are used to provide a “thick description” or depth of understanding to complement breadth of understanding afforded by quantitative methods, elicit the perspective of those being studied, explore issues that have not been well studied, develop conceptual theories or test hypotheses, or evaluate the process of a phenomenon or intervention. Qualitative methods adhere to many of the same principles of scientific rigor as quantitative methods, but often differ with respect to study design, data collection and data analysis strategies. For instance, participants for qualitative studies are usually sampled purposefully rather than at random and the design usually reflects an iterative process alternating between data collection and analysis. The most common techniques for data collection are individual semi-structured interviews, focus groups, document reviews, and participant observation. Strategies for analysis are usually inductive, based on principles of grounded theory or phenomenology. Qualitative methods are also used in combination with quantitative methods in mixed method designs for convergence, complementarity, expansion, development, and sampling. Rigorously applied qualitative methods offer great potential in contributing to the scientific foundation of mental health services research. PMID:25350675
An efficient mode-splitting method for a curvilinear nearshore circulation model
Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.
2007-01-01
A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation in dealing with the mixed derivative and convective terms from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases which represent respectively the motions dominated by the gravity mode and the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in model application to tidal current simulations in San Francisco Bight.
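A minimal sketch of one ADI step on a model parabolic problem (pure diffusion on the unit square with zero boundaries; the mixed-derivative and convective terms handled by McKee et al.'s scheme are omitted here, and all parameters are assumptions):

```python
# Illustrative Peaceman-Rachford ADI sketch for u_t = u_xx + u_yy on the
# unit square with homogeneous Dirichlet boundaries. Each half-step is
# implicit in one direction only, so the 2D solve reduces to batches of
# tridiagonal systems.
import numpy as np
from scipy.linalg import solve_banded

n, h, dt = 31, 1.0 / 32, 1e-3          # interior points, grid size, time step
r = dt / h**2

# Banded form of (I - (r/2) * second-difference) on the interior points.
ab = np.zeros((3, n))
ab[0, 1:] = -r / 2                      # superdiagonal
ab[1, :] = 1 + r                        # diagonal
ab[2, :-1] = -r / 2                     # subdiagonal

def explicit_half(v):
    """Apply (I + (r/2) * second-difference) along axis 0, zero boundaries."""
    out = (1 - r) * v
    out[1:] += (r / 2) * v[:-1]
    out[:-1] += (r / 2) * v[1:]
    return out

def adi_step(u):
    u_half = solve_banded((1, 1), ab, explicit_half(u.T).T)      # implicit in x
    return solve_banded((1, 1), ab, explicit_half(u_half).T).T   # implicit in y

# Single Fourier mode: the exact solution decays like exp(-2*pi**2*t).
x = np.arange(1, n + 1) * h
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
u0 = u.copy()
for _ in range(100):                    # advance to t = 0.1
    u = adi_step(u)
err = np.abs(u - np.exp(-2 * np.pi**2 * 0.1) * u0).max()
print(f"max error at t = 0.1: {err:.2e}")
```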
NASA Astrophysics Data System (ADS)
Pillet, N.; Robin, C.; Dupuis, M.; Hupin, G.; Berger, J.-F.
2017-03-01
The main objective of this paper is to review the state of the art of the multiparticle-multihole configuration mixing approach which was proposed and implemented using the Gogny interaction ~10 years ago. Various theoretical aspects are re-analyzed when a Hamiltonian description is chosen: the link with exact many-body theories, the impact of truncations in the multiconfigurational space, the importance of defining single-particle orbitals which are consistent with the correlations introduced in the many-body wave function, the role of the self-consistency, and more practically the numerical convergence algorithm. Several applications done with the phenomenological effective Gogny interaction are discussed. Finally, future directions to extend and generalize the method are outlined.
Experiments in dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Berenfeld, A.
1983-01-01
Experimental results are given on the mixing of a single row of jets with an isothermal mainstream in a straight duct, to include flow and geometric variations typical of combustion chambers in gas turbine engines. The principal conclusions reached from these experiments were: at constant momentum ratio, variations in density ratio have only a second-order effect on the profiles; a first-order approximation to the mixing of jets with a variable temperature mainstream can be obtained by superimposing the jets-in-an-isothermal-crossflow and mainstream profiles; flow area convergence, especially injection-wall convergence, significantly improves the mixing; for opposed rows of jets, with the orifice centerlines in-line, the optimum ratio of orifice spacing to duct height is one half of the optimum value for single-side injection at the same momentum ratio; and for opposed rows of jets, with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is twice the optimum value for single-side injection at the same momentum ratio.
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress, and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM, and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM, and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
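As an illustration of the general technique behind Mix2 (fitting a mixture of probability distributions by Expectation Maximization), here is a minimal sketch on a toy one-dimensional two-Gaussian mixture. The names and model are illustrative only and do not reproduce Mix2's fragment-bias likelihood:

```python
import math
import random

def em_gaussian_mixture(data, n_iter=200):
    """EM for a two-component 1D Gaussian mixture with unit variances.
    Illustrative only: estimates the two means and the mixing weight."""
    mu1, mu2, w = min(data), max(data), 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - w) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate the means and the weight from responsibilities
        s = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
        w = s / len(data)
    return mu1, mu2, w

random.seed(0)
data = ([random.gauss(-3, 1) for _ in range(300)] +
        [random.gauss(3, 1) for _ in range(300)])
mu1, mu2, w = em_gaussian_mixture(data)
```

With well-separated components, the iteration recovers means near -3 and 3 and a weight near 0.5; the simultaneous update of component parameters and weights mirrors, in miniature, Mix2's joint abundance and bias estimation.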
Stability of nonuniform rotor blades in hover using a mixed formulation
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Hodges, D. H.; Avila, J. H.; Kung, R. M.
1980-01-01
A mixed formulation for calculating static equilibrium and stability eigenvalues of nonuniform rotor blades in hover is presented. The static equilibrium equations are nonlinear and are solved by an accurate and efficient collocation method. The linearized perturbation equations are solved by a one step, second order integration scheme. The numerical results correlate very well with published results from a nearly identical stability analysis based on a displacement formulation. Slight differences in the results are traced to terms in the equations that relate moments to derivatives of rotations. With the present ordering scheme, in which terms of the order of squares of rotations are neglected with respect to unity, it is not possible to achieve completely equivalent models based on mixed and displacement formulations. The one step methods reveal that a second order Taylor expansion is necessary to achieve good convergence for nonuniform rotating blades. Numerical results for a hypothetical nonuniform blade, including the nonlinear static equilibrium solution, were obtained with no more effort or computer time than that required for a uniform blade.
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
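The monofractal versus multifractal distinction the authors quantify can be illustrated with a crude two-scale box-counting estimate of the generalized dimension D_q. This sketch is not the authors' pipeline; it only shows that a homogeneous (fully mixed) 2D field gives D_q = 2:

```python
import math

def partition_function(image, box, q):
    """Sum of (box mass)^q over a grid of box-by-box cells of a square 2D array.
    Assumes the array size is divisible by `box`."""
    n = len(image)
    total = sum(sum(row) for row in image)
    z = 0.0
    for i in range(0, n, box):
        for j in range(0, n, box):
            mass = sum(image[a][b]
                       for a in range(i, i + box)
                       for b in range(j, j + box)) / total
            if mass > 0:
                z += mass ** q
    return z

def generalized_dimension(image, q, box_small=2, box_large=4):
    """Crude two-scale estimate of the generalized dimension D_q (q != 1)."""
    n = len(image)
    tau = ((math.log(partition_function(image, box_small, q)) -
            math.log(partition_function(image, box_large, q))) /
           (math.log(box_small / n) - math.log(box_large / n)))
    return tau / (q - 1)

# A homogeneous (fully mixed) image is monofractal with D_q = 2 for all q.
uniform = [[1.0] * 16 for _ in range(16)]
d2 = generalized_dimension(uniform, 2.0)
```

For inhomogeneous grain distributions, D_q varies with q (multifractal); as mixing homogenizes the field, the spectrum collapses toward the single value 2, as the abstract describes.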
Convergence of Newton's method for a single real equation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single-point iteration method. The resulting convergence intervals are frequently considerably smaller than the actual convergence zones. For a specific single-point iteration method, such as Newton's method, better estimates of the regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
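A minimal sketch of the iteration under discussion (Newton's method for a single real equation), assuming a differentiable function with a simple zero. This is illustrative only and is not Campbell's convergence-zone technique:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for a single real equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)  # Newton update: x_{k+1} = x_k - f(x_k)/f'(x_k)
    return x

# Example: the positive zero of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x0 = 1, the iterates 1, 1.5, 1.41667, ... converge quadratically; the paper's point is that guaranteed-convergence starting regions for this specific iteration can be much larger than the contraction-mapping estimate suggests.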
ERIC Educational Resources Information Center
Nelson, Jason M.; Canivez, Gary L.
2012-01-01
Empirical examination of the Reynolds Intellectual Assessment Scales (RIAS; C. R. Reynolds & R. W. Kamphaus, 2003a) has produced mixed results regarding its internal structure and convergent validity. Various aspects of validity of RIAS scores with a sample (N = 521) of adolescents and adults seeking psychological evaluations at a university-based…
Breakdown of the reaction-diffusion master equation with nonelementary rates
NASA Astrophysics Data System (ADS)
Smith, Stephen; Grima, Ramon
2016-05-01
The chemical master equation (CME) is the exact mathematical formulation of chemical reactions occurring in a dilute and well-mixed volume. The reaction-diffusion master equation (RDME) is a stochastic description of reaction-diffusion processes on a spatial lattice, assuming well mixing only on the length scale of the lattice. It is clear that, for the sake of consistency, the solution of the RDME of a chemical system should converge to the solution of the CME of the same system in the limit of fast diffusion: Indeed, this has been tacitly assumed in most literature concerning the RDME. We show that, in the limit of fast diffusion, the RDME indeed converges to a master equation but not necessarily the CME. We introduce a class of propensity functions, such that if the RDME has propensities exclusively of this class, then the RDME converges to the CME of the same system, whereas if the RDME has propensities not in this class, then convergence is not guaranteed. These are revealed to be elementary and nonelementary propensities, respectively. We also show that independent of the type of propensity, the RDME converges to the CME in the simultaneous limit of fast diffusion and large volumes. We illustrate our results with some simple example systems and argue that the RDME cannot generally be an accurate description of systems with nonelementary rates.
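To make the distinction concrete, here is a minimal stochastic simulation (Gillespie-type) of a birth-death system with elementary propensities, alongside an example of a nonelementary (Michaelis-Menten) propensity of the kind the paper warns about. Parameter names are illustrative:

```python
import random

def gillespie_birth_death(k_birth, k_death, n0, t_end, rng):
    """Minimal stochastic simulation (Gillespie SSA) of a birth-death process
    with *elementary* propensities: a_birth = k_birth, a_death = k_death * n."""
    t, n = 0.0, n0
    while t < t_end:
        a1, a2 = k_birth, k_death * n
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time
        if rng.random() < a1 / a0:        # choose which reaction fires
            n += 1
        else:
            n -= 1
    return n

# An example *nonelementary* propensity: the Michaelis-Menten form, obtained by
# reducing an elementary enzyme mechanism under a quasi-steady-state assumption.
def michaelis_menten_propensity(n_substrate, v_max, K_M):
    return v_max * n_substrate / (K_M + n_substrate)

rng = random.Random(42)
samples = [gillespie_birth_death(10.0, 1.0, 0, 20.0, rng) for _ in range(200)]
mean_n = sum(samples) / len(samples)
```

The elementary birth-death system settles to a mean copy number k_birth/k_death = 10; the paper's result is that an RDME built from elementary propensities converges to the CME under fast diffusion, while reduced forms like the Michaelis-Menten propensity need not.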
On the Berry-Esséen bound of frequency polygons for ϕ-mixing samples.
Huang, Gan-Ji; Xing, Guodong
2017-01-01
Under some mild assumptions, the Berry-Esséen bound of frequency polygons for ϕ -mixing samples is presented. By the bound derived, we obtain the corresponding convergence rate of uniformly asymptotic normality, which is nearly [Formula: see text] under the given conditions.
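A frequency polygon joins the midpoints of adjacent histogram bins by straight lines. A minimal sketch of the estimator follows, for i.i.d. data; the ϕ-mixing dependence structure studied in the paper is not modeled here:

```python
import math

def frequency_polygon(data, x, bin_width, origin=0.0):
    """Frequency polygon density estimate at x: linear interpolation between
    the midpoints of the two nearest histogram bins (illustrative sketch)."""
    n = len(data)
    counts = {}
    for v in data:
        k = math.floor((v - origin) / bin_width)
        counts[k] = counts.get(k, 0) + 1
    # Bin k has midpoint origin + (k + 0.5)*bin_width and height count/(n*h).
    k = math.floor((x - origin) / bin_width - 0.5)  # bin whose midpoint is left of x
    m_left = origin + (k + 0.5) * bin_width
    f_left = counts.get(k, 0) / (n * bin_width)
    f_right = counts.get(k + 1, 0) / (n * bin_width)
    w = (x - m_left) / bin_width
    return (1 - w) * f_left + w * f_right

# Uniform data on [0, 1): the density estimate at 0.5 should be close to 1.
data = [i / 1000 for i in range(1000)]
est = frequency_polygon(data, 0.5, 0.1)
```

Berry-Esséen-type results such as the paper's bound the distance between the standardized estimator's distribution and the normal law, quantifying the rate of the asymptotic normality.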
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on his/her private information for maximizing his/her own averaged utility. It is shown that if the sequence of admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
NASA Astrophysics Data System (ADS)
Gyrya, V.; Lipnikov, K.
2017-11-01
We present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in resistive magnetohydrodynamics.
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to obtain the numerical solution of the time-fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation to lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time-fractional derivatives using finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion of the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give a priori error estimates in the two- and three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation obtained using a Matlab procedure. PMID:25184148
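The abstract states that the Caputo time-fractional derivative is discretized by finite differences. One standard choice, shown here as an assumption rather than the paper's exact scheme, is the L1 formula, which happens to be exact for u(t) = t:

```python
import math

def caputo_l1(u_vals, alpha, dt):
    """L1 finite-difference approximation of the Caputo derivative of order
    alpha in (0,1) at the last grid point t_n = n*dt, given u at t_0..t_n."""
    n = len(u_vals) - 1
    coef = dt ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for j in range(n):
        # L1 weights: b_j = (j+1)^(1-alpha) - j^(1-alpha)
        b_j = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        total += b_j * (u_vals[n - j] - u_vals[n - j - 1])
    return coef * total

# Check against the exact Caputo derivative of u(t) = t,
# which is t^(1-alpha) / Gamma(2-alpha); the L1 formula is exact here.
alpha, dt, steps = 0.5, 0.01, 100
u = [i * dt for i in range(steps + 1)]
approx = caputo_l1(u, alpha, dt)
exact = (steps * dt) ** (1 - alpha) / math.gamma(2 - alpha)
```

The memory sum over all past increments is what makes time-fractional solvers expensive, motivating the error and stability analysis the paper carries out for the coupled H^1-GMFE system.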
NASA Astrophysics Data System (ADS)
Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon
2017-09-01
Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on Von Mises and associative flow criteria is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of elastic-plastic mixed-iterative analysis is appropriate.
Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods referred to as the ACM and wavelet filter schemes are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical or experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that, even though the flows are rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.
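Grid-convergence studies of this kind typically report an observed order of accuracy computed from solutions on three systematically refined grids. A minimal sketch of that Richardson-style estimate (illustrative, not the authors' code):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Observed order of accuracy from a scalar functional computed on three
    systematically refined grids (coarse -> medium -> fine)."""
    return (math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) /
            math.log(refinement_ratio))

# For a p-th order method, f_h ~ f_exact + C*h**p; with grids h, h/2, h/4
# the estimate recovers p.
f_exact, C, p, h = 1.0, 0.3, 2.0, 0.1
f1 = f_exact + C * h ** p
f2 = f_exact + C * (h / 2) ** p
f3 = f_exact + C * (h / 4) ** p
order = observed_order(f1, f2, f3)
```

When no analytical or experimental data exist, as in the two models here, such three-grid estimates (against fine-grid reference solutions) are the practical way to verify that a scheme achieves its design order.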
NASA Astrophysics Data System (ADS)
Ganguly, S.; Lubetzky, E.; Martinelli, F.
2015-05-01
The East process is a 1D kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an -window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an -window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, on which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method based argument of Bolthausen (1982) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results by a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1)-window.
The generalized Lyapunov theorem and its application to quantum channels
NASA Astrophysics Data System (ADS)
Burgarth, Daniel; Giovannetti, Vittorio
2007-05-01
We give a simple and physically intuitive necessary and sufficient condition for a map acting on a compact metric space to be mixing (i.e. infinitely many applications of the map transfer any input into a fixed convergence point). This is a generalization of the 'Lyapunov direct method'. First we prove this theorem in topological spaces and for arbitrary continuous maps. Finally we apply our theorem to maps which are relevant in open quantum systems and quantum information, namely quantum channels. In this context, we also discuss the relations between mixing and ergodicity (i.e. the property that there exists only a single input state which is left invariant by a single application of the map), showing that the two are equivalent when the invariant point of the ergodic map is pure.
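A classical analogue of a mixing map is a strictly positive stochastic matrix: repeated application drives every input distribution to the same fixed point. A minimal sketch using classical probability vectors rather than quantum channels:

```python
def apply_channel(T, p):
    """One application of a column-stochastic map T to a distribution p."""
    n = len(p)
    return [sum(T[i][j] * p[j] for j in range(n)) for i in range(n)]

def iterate_to_fixed_point(T, p, n_steps=200):
    """Repeated application of the map, as in the definition of mixing."""
    for _ in range(n_steps):
        p = apply_channel(T, p)
    return p

# A strictly positive column-stochastic matrix is mixing: every input
# distribution converges to the same fixed (stationary) point.
T = [[0.9, 0.2],
     [0.1, 0.8]]
p_a = iterate_to_fixed_point(T, [1.0, 0.0])
p_b = iterate_to_fixed_point(T, [0.0, 1.0])
```

Both initial distributions are driven to the unique stationary point (2/3, 1/3); the paper's Lyapunov-type condition characterizes exactly when a map on a compact metric space has this behavior.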
Depth and Extent of Gas-Ablator Mix in Symcap Implosions at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Pino, Jesse; Ma, T.; MacLaren, S. A.; Salmonson, J. D.; Ho, D.; Khan, S. F.; Masse, L.; Ralph, J. E.; Czajka, C.; Casey, D.; Sacks, R.; Smalyuk, V. A.; Tipton, R. E.; Kyrala, G. A.
2017-10-01
A longstanding question in ICF physics has been the extent to which capsule ablator material mixes into the burning fusion fuel and degrades performance. Several recent campaigns at the National Ignition Facility have examined this question through the use of separated reactants. A layer of CD plastic is placed on the inner surface of the CH shell and the shell is filled with a gas mixture of H and T. This allows for simultaneous neutron signals that inform different aspects of the physics: we get the core TT neutron yield, atomic mix from the DT neutrons, and information about shell heating from the DD neutron signal. By systematically recessing the CD layer away from the gas boundary we gain an inference of the depth of the mixing layer. This presentation will cover three campaigns to look at mixing depth: an ignition-like design ("Low-foot") at two convergence ratios, as well as a robust, nearly one-dimensional, low convergence, symmetric platform designed to minimize ablation front feed-through (HED 2-shock). We show that the 2-shock capsule has less ablator-gas mix, and compare the experimental results to mix-model simulations. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344, LLNS, LLC.
Yang, Li; Li, Shanshan; Liu, Jixiao; Cheng, Jingmeng
2018-02-01
To explore and utilize the advantages of droplet-based microfluidics, the hydrodynamics and mixing processes within droplets traveling through a T-junction channel and convergent-divergent sinusoidal microchannels are studied by numerical simulations and experiments, respectively. In the T-junction channel, the mixing efficiency is significantly influenced by the twirling effect, which controls the initial distribution of the mixture during the droplet formation stage. The internal recirculating flow can therefore create a convection mechanism, thus improving mixing. The twirling effect is noticeably influenced by the velocity of the continuous phase. In the sinusoidal channel, Dean vortices and droplet deformation are induced by centrifugal force and the alternating velocity gradient, thus enhancing the mixing efficiency. The best mixing occurred when the droplet size is comparable with the channel width. Finally, we propose a unique optimized structure, which includes a T-junction inlet joined to a sinusoidal channel. In this structure, the mixing of fluids in the droplets follows two routes: one is the twirling effect and symmetric recirculation flow in the straight channel; the other is the asymmetric recirculation and droplet deformation in the winding, variable cross-section. Among the three structures, the optimized structure has the best mixing efficiency at the shortest mixing time (0.25 ms). The combination of the twirling effect, variable cross-section effect, and Dean vortices greatly intensifies the chaotic flow. This study provides insight into the mixing process and may benefit the design and operation of droplet-based microfluidics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experiments in dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Berenfeld, A.
1983-01-01
Experimental results are presented on the mixing of a single row of jets with an isothermal mainstream in a straight duct, with flow and geometric variations typical of combustion chambers in gas turbine engines included. It is found that at a constant momentum ratio, variations in the density ratio have only a second-order effect on the profiles. A first-order approximation to the mixing of jets with a variable temperature mainstream can, it is found, be obtained by superimposing the jets-in-an-isothermal-crossflow and mainstream profiles. Another finding is that flow area convergence, especially injection-wall convergence, significantly improves the mixing. For opposed rows of jets with the orifice centerlines in-line, the optimum ratio of orifice spacing to duct height is determined to be one half of the optimum value for single-side injection at the same momentum ratio. For opposed rows of jets with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is found to be twice the optimum value for single-side injection at the same momentum ratio.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of the proposed algorithms requires prior knowledge of the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
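The cyclic versus parallel distinction can be illustrated with metric projections (which are firmly nonexpansive) onto two overlapping intervals. This toy sketch is not the authors' multiple-set split equality algorithm:

```python
def project_interval(x, lo, hi):
    """Metric projection onto [lo, hi] -- a firmly nonexpansive map."""
    return max(lo, min(hi, x))

def cyclic_projections(x, intervals, n_iter=100):
    """Cyclic iteration: apply the projections one after another."""
    for _ in range(n_iter):
        for lo, hi in intervals:
            x = project_interval(x, lo, hi)
    return x

def parallel_projections(x, intervals, n_iter=100):
    """Parallel iteration: average all the projections at each step."""
    for _ in range(n_iter):
        x = sum(project_interval(x, lo, hi) for lo, hi in intervals) / len(intervals)
    return x

# Two intervals with a common part: both schemes converge into the intersection.
sets = [(0.0, 2.0), (1.0, 3.0)]
x_cyc = cyclic_projections(10.0, sets)
x_par = parallel_projections(10.0, sets)
```

Starting from x = 10, both iterations land at 2.0, a common fixed point of the two projections; the mixed algorithms of the paper interleave these two update patterns for general firmly quasi-nonexpansive operators in Hilbert spaces.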
Alvarez, Joseph L.; Watson, Lloyd D.
1989-01-01
An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to an analysis apparatus. The gas and liquid are mixed in a converging-diverging nozzle, where the liquid is sheared into small particles of a size and uniformity that form a spray which can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas.
NASA Astrophysics Data System (ADS)
de Foy, B.; Clappier, A.; Molina, L. T.; Molina, M. J.
2006-04-01
Mexico City lies in a high altitude basin where air quality and pollutant fate is strongly influenced by local winds. The combination of high terrain with weak synoptic forcing leads to weak and variable winds with complex circulation patterns. A gap wind entering the basin in the afternoon leads to very different wind convergence lines over the city depending on the meteorological conditions. Surface and upper-air meteorological observations are analysed during the MCMA-2003 field campaign to establish the meteorological conditions and obtain an index of the strength and timing of the gap wind. A mesoscale meteorological model (MM5) is used in combination with high-resolution satellite data for the land surface parameters and soil moisture maps derived from diurnal ground temperature range. A simple method to map the lines of wind convergence both in the basin and on the regional scale is used to show the different convergence patterns according to episode types. The gap wind is found to occur on most days of the campaign and is the result of a temperature gradient across the southern basin rim which is very similar from day to day. Momentum mixing from winds aloft into the surface layer is much more variable and can determine both the strength of the flow and the pattern of the convergence zones. Northerly flows aloft lead to a weak jet with an east-west convergence line that progresses northwards in the late afternoon and early evening. Westerlies aloft lead to both stronger gap flows due to channelling and winds over the southern and western basin rim. This results in a north-south convergence line through the middle of the basin starting in the early afternoon. Improved understanding of basin meteorology will lead to better air quality forecasts for the city and better understanding of the chemical regimes in the urban atmosphere.
Symptoms and fear in heart failure patients approaching end of life: a mixed methods study.
Abshire, Martha; Xu, Jiayun; Dennison Himmelfarb, Cheryl; Davidson, Patricia; Sulmasy, Daniel; Kub, Joan; Hughes, Mark; Nolan, Marie
2015-11-01
The purpose of this study was to consider how fear and symptom experience are perceived in patients with heart failure at the end of life. Heart failure is a burdensome condition and mortality rates are high globally. There is substantive literature describing suffering and unmet needs, but description of the experience of fear and its relationship with symptom burden is limited. A convergent mixed methods design was used. Data from the McGill Quality of Life Questionnaire (n = 55) were compared to data from in-depth interviews (n = 5). Patients denied fear when asked directly, but frequently referred to moments of being afraid when they were experiencing symptoms. In addition, patients reported few troublesome symptoms on the survey, but mentioned many more symptoms during interviews. These data not only identify the relationship between psychological issues and symptom experience but also elucidate the benefit of a mixed method approach in describing such experiences from the perspective of the patient. Future research should examine relationships between and among symptom experience, fear, and other psychological constructs across the illness trajectory. Conversations about the interaction of symptom burden and fear can lead both to a more robust assessment of symptoms and to patient-centred interventions. © 2015 John Wiley & Sons Ltd.
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
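A compound Poisson random walk (the special case of a compound Cox process with a deterministic rate; the Cox case would randomize the rate itself) can be sampled directly. A minimal sketch, with illustrative parameters:

```python
import random

def compound_poisson(rate, t, jump_sampler, rng):
    """One sample of a compound Poisson process at time t: a Poisson(rate*t)
    number of i.i.d. jumps, summed. The doubly stochastic (Cox) case would
    draw `rate` itself from a random intensity."""
    # Sample the Poisson count by accumulating exponential inter-arrival times.
    n, s = 0, rng.expovariate(rate)
    while s < t:
        n += 1
        s += rng.expovariate(rate)
    return sum(jump_sampler(rng) for _ in range(n))

rng = random.Random(1)
samples = [compound_poisson(5.0, 10.0, lambda r: r.gauss(0.0, 1.0), rng)
           for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

With standard normal jumps, the process has mean 0 and variance rate*t = 50; randomizing the intensity, as in the Cox setting of the paper, produces exactly the variance-mean mixed normal limits the theorem describes.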
Ferenczy, György G
2013-04-05
The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable for deriving local basis nonorthogonal orbitals that minimize the energy of the system, and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent to those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, deformation of the wave function near the boundary is observed without bond charges, and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two-layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.
Zimmerman, Mark; Chelminski, Iwona; Young, Diane; Dalrymple, Kristy; Martinez, Jennifer H
2014-10-01
To acknowledge the clinical significance of manic features in depressed patients, DSM-5 included criteria for a mixed features specifier for major depressive disorder (MDD). In the present report from the Rhode Island Methods to Improve Diagnostic Assessment and Services (MIDAS) project we modified our previously published depression scale to include a subscale assessing the DSM-5 mixed features specifier. More than 1100 psychiatric outpatients with MDD or bipolar disorder completed the Clinically Useful Depression Outcome Scale (CUDOS) supplemented with questions for the DSM-5 mixed features specifier (CUDOS-M). To examine discriminant and convergent validity the patients were rated on clinician severity indices of depression, anxiety, agitation, and irritability. Discriminant and convergent validity was further examined in a subset of patients who completed other self-report symptom severity scales. Test-retest reliability was examined in a subset who completed the CUDOS-M twice. We compared CUDOS-M scores in patients with MDD, bipolar depression, and hypomania. The CUDOS-M subscale had high internal consistency and test-retest reliability, was more highly correlated with another self-report measure of mania than with measures of depression, anxiety, substance use problems, eating disorders, and anger, and was more highly correlated with clinician severity ratings of agitation and irritability than anxiety and depression. CUDOS-M scores were significantly higher in hypomanic patients than depressed patients, and patients with bipolar depression than patients with MDD. The study was cross-sectional, thus we did not examine whether the CUDOS-M detects emerging mixed symptoms when depressed patients are followed over time. 
Also, while we examined the correlation between the CUDOS-M and clinician ratings of agitation and irritability, we did not examine the association with a clinician measure of manic symptomatology such as the Young Mania Rating Scale. In the present study of a large sample of psychiatric outpatients, the CUDOS-M was a reliable and valid measure of the DSM-5 mixed features specifier for MDD. Copyright © 2014 Elsevier B.V. All rights reserved.
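The internal consistency reported for the CUDOS-M subscale is typically quantified with Cronbach's alpha. As a minimal sketch with hypothetical item scores (not the study's data), alpha can be computed directly from item and total-score variances:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    using population variances throughout.
    """
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(col) for col in items) / var(totals))

# Perfectly consistent (identical) items give alpha = 1.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Test-retest reliability, by contrast, is a correlation between two administrations of the same scale rather than a within-scale statistic.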
Neutrinoless double-β decay of Se82 in the shell model: Beyond the closure approximation
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.; Brown, B. A.
2014-05-01
We recently proposed a method [R. A. Senkov and M. Horoi, Phys. Rev. C 88, 064312 (2013), 10.1103/PhysRevC.88.064312] to calculate the standard nuclear matrix elements for neutrinoless double-β decay (0νββ) of Ca48 going beyond the closure approximation. Here we extend this analysis to the important case of Se82, which was chosen as the base isotope for the upcoming SuperNEMO experiment. We demonstrate that by using a mixed method that considers information from closure and nonclosure approaches, one can get excellent convergence properties for the nuclear matrix elements, which allows one to avoid unmanageable computational costs. We show that in contrast with the closure approximation the mixed approach has a very weak dependence on the average closure energy. The matrix elements for the heavy neutrino-exchange mechanism that could contribute to the 0νββ decay of Se82 are also presented.
Choice and Constraint in the Negotiation of the Grandparent Role: A Mixed-Methods Study.
McGarrigle, Christine A; Timonen, Virpi; Layte, Richard
2018-01-01
Few studies have examined how the allocation and consequences of grandchild care vary across different socioeconomic groups. We analyze qualitative data alongside data from The Irish Longitudinal Study on Ageing (TILDA), in a convergent mixed-methods approach. Regression models examined characteristics associated with grandchild care, and the relationship between grandchild care and depressive symptoms and well-being. Qualitative data shed light on processes and choices that explain patterns of grandchild care provision. Tertiary-educated grandparents provided less intensive grandchild care compared with primary educated. Qualitative data indicated that this pattern stems from early boundary-drawing among higher educated grandparents while lower socioeconomic groups were constrained and less able to say no. Intensive grandchild care was associated with more depressive symptoms and lower well-being and was moderated by participation in social activities and level of education attainment. The effect of grandchild care on well-being of grandparents depends on whether it is provided by choice or obligation.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient algorithm, implemented with iteration on data, a diagonal preconditioner, and double precision, may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
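The core of the algorithm compared above can be sketched in a few lines. The following is a generic preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner on a tiny dense symmetric positive-definite system; it is an illustration of the method, not the animal-breeding implementation, which iterates on data rather than forming the matrix:

```python
def pcg(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient with a diagonal (Jacobi)
    preconditioner, for a symmetric positive-definite matrix A
    given as a list of rows."""
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]      # diagonal preconditioner

    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

    x = [0.0] * n
    r = b[:]                                        # residual for x = 0
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 2x2 SPD example: exact solution is (1/11, 7/11).
x = pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Because each iteration needs only a matrix-vector product, the "iteration on data" variant can form `Ap` by streaming through the records without ever storing A, which is why memory, not programming effort, becomes the limiting factor.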
Finlay, Jessica M; Kobayashi, Lindsay C
2018-07-01
Social isolation and loneliness are increasingly prevalent among older adults in the United States, with implications for morbidity and mortality risk. Little research to date has examined the complex person-place transactions that contribute to social well-being in later life. This study aimed to characterize personal and neighborhood contextual influences on social isolation and loneliness among older adults. Interviews were conducted with independent-dwelling men and women (n = 124; mean age 71 years) in the Minneapolis metropolitan area (USA) from June to October, 2015. A convergent mixed-methods design was applied, whereby quantitative and qualitative approaches were used in parallel to gain simultaneous insights into statistical associations and in-depth individual perspectives. Logistic regression models predicted self-reported social isolation and loneliness, adjusted for age, gender, past occupation, race/ethnicity, living alone, street type, residential location, and residential density. Qualitative thematic analyses of interview transcripts probed individual experiences with social isolation and loneliness. The quantitative results suggested that African American adults, those with a higher socioeconomic status, those who did not live alone, and those who lived closer to the city center were less likely to report feeling socially isolated or lonely. The qualitative results identified and explained variation in outcomes within each of these factors. They provided insight on those who lived alone but did not report feeling lonely, finding that solitude was sought after and enjoyed by a portion of participants. Poor physical and mental health often resulted in reporting social isolation, particularly when coupled with poor weather or low-density neighborhoods. At the same time, poor health sometimes provided opportunities for valued social engagement with caregivers, family, and friends. 
The combination of group-level risk factors and in-depth personal insights provided by this mixed methodology may be useful for developing strategies that address social isolation and loneliness in older communities. Copyright © 2018 Elsevier Ltd. All rights reserved.
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability, convergence analysis, and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.
Abdul-Razzak, Amane; Sherifali, Diana; You, John; Simon, Jessica; Brazil, Kevin
2016-08-01
Despite the recognized importance of end-of-life (EOL) communication between patients and physicians, the extent and quality of such communication is lacking. We sought to understand patient perspectives on physician behaviours during EOL communication. In this mixed methods study, we conducted quantitative and qualitative strands and then merged data sets during a mixed methods analysis phase. In the quantitative strand, we used the quality of communication tool (QOC) to measure physician behaviours that predict global rating of satisfaction in EOL communication skills, while in the qualitative strand we conducted semi-structured interviews. During the mixed methods analysis, we compared and contrasted qualitative and quantitative data. Seriously ill inpatients at three tertiary care hospitals in Canada. We found convergence between qualitative and quantitative strands: patients desire candid information from their physician and a sense of familiarity. The quantitative results (n = 132) suggest a paucity of certain EOL communication behaviours in this seriously ill population with a limited prognosis. The qualitative findings (n = 16) suggest that at times, physicians did not engage in EOL communication despite patient readiness, while sometimes this may represent an appropriate deferral after assessment of a patient's lack of readiness. Avoidance of certain EOL topics may not always be a failure if it is a result of an assessment of lack of patient readiness. This has implications for future tool development: a measure could be built in to assess whether physician behaviours align with patient readiness. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Being outside learning about science is amazing: A mixed methods study
NASA Astrophysics Data System (ADS)
Weibel, Michelle L.
This study used a convergent parallel mixed methods design to examine teachers' environmental attitudes and concerns about an outdoor educational field trip. Converging both quantitative data (Environmental Attitudes Scale and teacher demographics) and qualitative data (Open-Ended Statements of Concern and interviews) facilitated interpretation. Research has shown that adults' attitudes toward the environment strongly influence children's attitudes regarding the environment. Science teachers' attitudes toward nature and attitudes toward children's field experiences influence the number and types of field trips teachers take. Measuring teacher attitudes is a way to assess teacher beliefs. The one-day outdoor field trip had significant outcomes for teachers. Quantitative results showed that practicing teachers' environmental attitudes changed following the Forever Earth outdoor field trip intervention. Teacher demographics showed no significant effects. Interviews provided a more in-depth understanding of teachers' perspectives relating to the field trip and environmental education. Four major themes emerged from the interviews: 1) environmental attitudes, 2) field trip program, 3) integrating environmental education, and 4) concerns. Teachers' major concern, addressed prior to the field trip through the Open-Ended Statements of Concern, was focused on students (i.e., behavior, safety, content knowledge) and was alleviated following the field trip. Interpretation of the integrated quantitative and qualitative results shows that teachers' personal and professional attitudes toward the environment influence their decision to integrate environmental education in classroom instruction. Since the Forever Earth field trip had a positive influence on teachers' environmental attitudes, further research is suggested to observe whether teachers integrate environmental education in the classroom to reach the overall goal of increasing environmental literacy.
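The pre/post attitude change described above is the kind of effect a paired-samples t-test quantifies. A minimal sketch, with hypothetical pre/post scores rather than the study's data:

```python
def paired_t(before, after):
    """Paired-samples t statistic for pre/post scores on the same subjects."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / (var / n) ** 0.5                        # t with n-1 df

# Hypothetical mean attitude scores for five teachers before and after the trip.
t_stat = paired_t(before=[3.0, 3.2, 2.8, 3.5, 3.1],
                  after=[3.6, 3.5, 3.4, 3.9, 3.6])
```

A large positive t here would indicate a consistent upward shift in scores, which is then interpreted alongside the qualitative interview themes in a convergent design.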
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large linear system, but it is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate; however, it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
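To make the underlying problem concrete (this is an illustrative building block, not the wavelet algorithm above): the right-hand side of the least-squares system is built from wrapped phase differences, and in one dimension simply integrating those differences recovers the phase (Itoh's method), provided true neighbouring differences stay below pi.

```python
import math

def wrap(phi):
    """Wrap a phase value into [-pi, pi)."""
    return (phi + math.pi) % (2.0 * math.pi) - math.pi

def unwrap_1d(wrapped):
    """Itoh's 1-D unwrapping: integrate the wrapped phase differences.

    In 2-D least-squares unwrapping, the same wrapped gradients form the
    right-hand side of a (weighted) discrete Poisson equation, whose
    solution is the large sparse system discussed above.
    """
    out = [wrapped[0]]
    for k in range(1, len(wrapped)):
        out.append(out[-1] + wrap(wrapped[k] - wrapped[k - 1]))
    return out

# A smooth ramp that exceeds pi gets wrapped, then recovered exactly.
true_phase = [0.5 * k for k in range(20)]
recovered = unwrap_1d([wrap(p) for p in true_phase])
```

In two dimensions no such direct integration exists when the wrapped gradient field is inconsistent, which is why an iterative solver (Gauss-Seidel, multigrid, or the proposed wavelet multiresolution scheme) is needed.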
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashyralyev, Allaberen; Okur, Ulker
In the present paper, the Crank-Nicolson difference scheme for the numerical solution of the stochastic parabolic equation with a dependent operator coefficient is considered. A theorem on convergence estimates for the solution of this difference scheme is established. In applications, convergence estimates for the solutions of difference schemes for the numerical solution of three mixed problems for parabolic equations are obtained. Numerical results are given.
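For orientation, here is a minimal sketch of the Crank-Nicolson scheme for the deterministic 1-D heat equation with zero Dirichlet boundaries; the stochastic scheme with operator coefficients analyzed in the record is considerably more involved, so this only illustrates the time-stepping structure. Each step solves a tridiagonal system, done here with the Thomas algorithm.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-, main, super-diagonals a, b, c)."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):                 # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def crank_nicolson(u, r, steps):
    """Crank-Nicolson for u_t = kappa*u_xx with zero Dirichlet boundaries.

    u holds interior grid values; r = kappa*dt/dx**2. Each step solves
    (I - r/2 L) u_new = (I + r/2 L) u_old, with L the discrete Laplacian.
    """
    m = len(u)
    a = [-0.5 * r] * m                    # sub-diagonal
    b = [1.0 + r] * m                     # main diagonal
    c = [-0.5 * r] * m                    # super-diagonal
    for _ in range(steps):
        rhs = []
        for i in range(m):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < m - 1 else 0.0
            rhs.append(u[i] + 0.5 * r * (left - 2.0 * u[i] + right))
        u = thomas(a, b, c, rhs)
    return u

# Sine initial profile on (0, 1) with 9 interior points; r = 1 would be
# unstable for the explicit scheme, but Crank-Nicolson remains stable.
u0 = [math.sin(math.pi * (i + 1) / 10.0) for i in range(9)]
u_final = crank_nicolson(u0, r=1.0, steps=10)
```

The scheme is second-order in both time and space and unconditionally stable, which is why it is a natural baseline for the convergence estimates discussed above.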
Phylogenetic search through partial tree mixing
2012-01-01
Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application available at http://dna.cs.byu.edu/psoda. Conclusions The use of Partial Tree Mixing in a partition-based tree space allows the algorithm to quickly converge on near-optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449
A fictitious domain approach for the Stokes problem based on the extended finite element method
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel; Lozinski, Alexei
2014-01-01
In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. Convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.
Using HT and DT gamma rays to diagnose mix in Omega capsule implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, M. J.; Herrmann, H. W.; Kim, Y. H.
Experimental evidence [1] indicates that shell material can be driven into the core of Omega capsule implosions on the same time scale as the initial convergent shock. It has been hypothesized that shock-generated temperatures at the fuel/shell interface in thin exploding pusher capsules diffusively drive shell material into the gas core between the time of shock passage and bang time. Here, we propose a method to temporally resolve and observe the evolution of shell material into the capsule core as a function of fuel/shell interface temperature (which can be varied by varying the capsule shell thickness). Our proposed method uses a CD plastic capsule filled with 50/50 HT gas and diagnosed using gas Cherenkov detection (GCD) to temporally resolve both the HT "clean" and DT "mix" gamma ray burn histories. Simulations using Hydra [2] for an Omega CD-lined capsule with a sub-micron layer of the inside surface of the shell pre-mixed into a fraction of the gas region produce gamma reaction history profiles that are sensitive to the depth to which this material is mixed. Furthermore, observing these differences as a function of capsule shell thickness is proposed to determine whether interface mixing is consistent with thermal diffusion (λ_ii ~ T²/(Z²ρ)) at the gas/shell interface. Finally, since hydrodynamic mixing from shell perturbations, such as the mounting stalk and glue, could complicate these types of capsule-averaged temporal measurements, simulations including their effects have also been performed, showing minimal perturbation of the hot spot geometry.
A New Theory of Mix in Omega Capsule Implosions
NASA Astrophysics Data System (ADS)
Knoll, Dana; Chacon, Luis; Rauenzahn, Rick; Simakov, Andrei; Taitano, William; Welser-Sherrill, Leslie
2014-10-01
We put forth a new mix model that relies on the development of a charge-separation electrostatic double layer at the fuel-pusher interface early in the implosion of an Omega plastic ablator capsule. The model predicts a sizable pusher mix (several atom %) into the fuel. The expected magnitude of the double-layer field is consistent with recent radial electric field measurements in Omega plastic ablator implosions. Our theory relies on two distinct physics mechanisms. First, and prior to shock breakout, the formation of a double layer at the fuel-pusher interface due to fast preheat-driven ionization; the double-layer electric field structure accelerates pusher ions fairly deep into the fuel. Second, after the double-layer mix has occurred, the inward-directed fuel velocity and temperature gradients behind the converging shock transport these pusher ions inward. We first discuss the foundations of this new mix theory. Next, we discuss our interpretation of the radial electric field measurements on Omega implosions. Then we discuss the second mechanism, which is responsible for transporting the pusher material, already mixed via the double layer, deep into the fuel on the shock convergence time scale. Finally, we make a connection to recent mix-motivated experimental data. This work was conducted under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory, managed by LANS, LLC under Contract DE-AC52-06NA25396.
Athié, Karen; Menezes, Alice Lopes do Amaral; da Silva, Angela Machado; Campos, Monica; Delgado, Pedro Gabriel; Fortes, Sandra; Dowrick, Christopher
2016-09-30
Community-based primary mental health care is recommended in low- and middle-income countries. The Brazilian Health System has been restructuring primary care by expanding its Family Health Strategy. Due to mental health problems, psychosocial vulnerability and accessibility, Matrix Support teams are being set up to broaden the professional scope of primary care. This paper aims to analyse the perceptions of health professionals and managers about the integration of primary care and mental health. In this mixed-method study 18 health managers and 24 professionals were interviewed from different primary and mental health care services in Rio de Janeiro. A semi-structured survey was conducted with 185 closed questions ranging from 1 to 5 and one open-ended question, to evaluate: access, gateway, trust, family focus, primary mental health interventions, mental health records, mental health problems, team collaboration, integration with community resources and primary mental health education. Two comparisons were made: health managers' and professionals' perceptions (Mann-Whitney non-parametric test) and health managers' perceptions (Kruskal-Wallis non-parametric test) across 4 service designs (General Traditional Outpatients, Mental Health Specialised Outpatients, Psychosocial Community Centre and Family Health Strategy) (SPSS version 17.0). Qualitative data were subjected to Framework Analysis. Firstly, health managers' and professionals' perceptions converged in all components, except the health record system. Secondly, managers' perceptions in traditional services contrasted with managers' perceptions in community-based services in components such as mental health interventions and team collaboration, and converged in gateway, trust, record system and primary mental health education. Qualitative data revealed an acceptance of mental health and primary care integration, but a lack of communication between institutions.
The mixed-methods approach demonstrated that interviewees consider mental health and primary care integration a requirement of the system, while their perceptions and the model of work produced by the institutional culture are inextricably linked. There is a gap between health managers' and professionals' understanding of community-based primary mental health care. The integration of different processes of work entails both rethinking workforce actions and providing institutional support to help make changes.
A thermodynamically consistent discontinuous Galerkin formulation for interface separation
Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...
2015-07-31
Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size which is unaffected by the presence of stiff massless interfaces. The proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method are found to be dependent on the cohesive penalty stiffnesses. Because of this advantage, the proposed approach is shown to yield more accurate predictions of crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.
NASA Technical Reports Server (NTRS)
Graf, Wiley E.
1991-01-01
A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods for simulating mechanical gear drives are the gear pair method and the solid-to-solid contact method. The former solves efficiently but with lower accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither approach addresses these problems fundamentally. To improve simulation efficiency while ensuring highly accurate results, a mixed model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected through the SolidWorks API, and fit curves are created in ADAMS from these coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions, and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and supplies mass and inertia data via the solid gear models; the simulation combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results agree more closely with theoretical calculations. Consequently, the mixed model method has high application value for studying the dynamics of gear mechanisms.
Perspectives on dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.
1986-01-01
A microcomputer code which displays 3-D oblique and 2-D plots of the temperature distribution downstream of jets mixing with a confined crossflow has been used to investigate the effects of varying the several independent flow and geometric parameters on the mixing. Temperature profiles calculated with this empirical model are presented to show the effects of orifice size and spacing, momentum flux ratio, density ratio, variable temperature mainstream, flow area convergence, orifice aspect ratio, and opposed and axially staged rows of jets.
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second and third order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions; quadratically convergent results are very limited. We introduce a new PRP method in which a restart strategy is also used; the method incorporates both function value and gradient value information and attains n-step quadratic convergence. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
NASA Astrophysics Data System (ADS)
Sen, Sangita; Tellgren, Erik I.
2018-05-01
External non-uniform magnetic fields acting on molecules induce non-collinear spin densities and spin-symmetry breaking. This necessitates a general two-component Pauli spinor representation. In this paper, we report the implementation of a general Hartree-Fock method, without any spin constraints, for non-perturbative calculations with finite non-uniform fields. London atomic orbitals are used to ensure faster basis convergence as well as invariance under constant gauge shifts of the magnetic vector potential. The implementation has been applied to investigate the joint orbital and spin response to a field gradient—quantified through the anapole moments—of a set of small molecules. The relative contributions of orbital and spin-Zeeman interaction terms have been studied both theoretically and computationally. Spin effects are stronger and show a general paramagnetic behavior for closed shell molecules while orbital effects can have either direction. Basis set convergence and size effects of anapole susceptibility tensors have been reported. The relation of the mixed anapole susceptibility tensor to chirality is also demonstrated.
Problematic Smartphone Use: Investigating Contemporary Experiences Using a Convergent Design
Kuss, Daria J.; Harkin, Lydia; Kanjo, Eiman; Billieux, Joel
2018-01-01
Internet-enabled smartphones are increasingly ubiquitous in the Western world. Research suggests a number of problems can result from mobile phone overuse, including dependence, dangerous and prohibited use. For over a decade, this has been measured by the Problematic Mobile Phone Use Questionnaire (PMPU-Q). Given the rapid developments in mobile technologies, changes of use patterns and possible problematic and addictive use, the aim of the present study was to investigate and validate an updated contemporary version of the PMPU-Q (PMPU-Q-R). A mixed methods convergent design was employed, including a psychometric survey (N = 512) alongside qualitative focus groups (N = 21), to elicit experiences and perceptions of problematic smartphone use. The results suggest the PMPU-Q-R factor structure can be updated to include smartphone dependence, dangerous driving, and antisocial smartphone use factors. Theories of problematic mobile phone use require consideration of the ubiquity and indispensability of smartphones in the present day and age, particularly regarding use whilst driving and in social interactions. PMID:29337883
The study on the control strategy of micro grid considering the economy of energy storage operation
NASA Astrophysics Data System (ADS)
Ma, Zhiwei; Liu, Yiqun; Wang, Xin; Li, Bei; Zeng, Ming
2017-08-01
To optimize the operation of a micro grid, guarantee the balance of electricity supply and demand, and promote the utilization of renewable energy, the control strategy of the micro grid energy storage system is studied. First, a mixed integer linear programming model is established based on receding horizon control. Second, a modified cuckoo search algorithm is proposed to solve the model. Finally, a case study examines the signal characteristics of the micro grid and batteries under the optimal control strategy, and the convergence of the modified cuckoo search algorithm is compared with that of other algorithms to verify the validity of the proposed model and method. The results show that different micro grid operating targets affect the control strategy of the energy storage system, which in turn affects the signal characteristics of the micro grid. Meanwhile, the convergence speed, computing time, and economy of the modified cuckoo search algorithm are improved compared with the traditional cuckoo search algorithm and the differential evolution algorithm.
Gitlin, Laura N.; Roth, David L.; Burgio, Louis D.; Loewenstein, David A.; Winter, Laraine; Nichols, Linda; Argüelles, Soledad; Corcoran, Mary; Burns, Robert; Martindale, Jennifer
2008-01-01
Objective: To evaluate psychometric properties and response patterns of the Caregiver Assessment of Function and Upset (CAFU), a 15-item multidimensional measure of dependence in dementia patients and caregiver reaction. Method: 640 families were administered the CAFU (53% White, 43% African American, and 4% mixed race and ethnicity). We created a random split of the sample and conducted exploratory factor analyses on Sample 1 and confirmatory factor analyses on Sample 2. Convergent and discriminant validity were evaluated using Spearman rank correlation coefficients. Results: A two-factor structure for functional items was derived, and excellent factorial validity was obtained. Convergent and discriminant validity were obtained for function and upset measures. Differential response patterns for dependence and caregiver upset were found for caregiver race, relationship, and care recipient gender but not for caregiver gender. Discussion: The CAFU is easily administered, reliable, and valid for evaluating appraisals of dependencies and upsetting care areas. PMID:15750049
Electron impact excitation of molecular hydrogen
Zammit, Mark Christian; Savage, Jeremy S.; Fursa, Dmitry V.; ...
2017-02-06
Here, we report the electron impact integrated and differential cross sections for excitation to the b 3Σu+, a 3Σg+, c 3Πu, B 1Σu+, E, F 1Σg+, C 1Πu, e 3Σu+, h 3Σg+, d 3Πu, B' 1Σu+, D 1Πu, B'' 1Σu+, and D' 1Πu states of molecular hydrogen in the energy range from 10 to 300 eV. Total scattering and total ionization cross sections are also presented. The calculations have been performed by using the convergent close-coupling method within the fixed-nuclei approximation. Detailed convergence studies have been performed with respect to the size of the close-coupling expansion, and a set of recommended cross sections has been produced. Significant differences with previous calculations are found. Agreement with experiment is mixed, ranging from excellent to poor depending on the transition and incident energies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yijie; Lim, Hyun-Kyung; de Almeida, Valmor F
2012-06-01
This progress report describes the development of a front tracking method for the solution of the governing equations of motion for two-phase micromixing of incompressible, viscous, liquid-liquid solvent extraction processes. The ability to compute the detailed local interfacial structure of the mixture allows characterization of the statistical properties of the two-phase mixture in terms of droplets, filaments, and other structures which emerge as a dispersed phase embedded into a continuous phase. Such a statistical picture provides the information needed for building a consistent coarsened model applicable to the entire mixing device. Coarsening is an undertaking for a future mathematical development and is outside the scope of the present work. We present here a method for accurate simulation of the micromixing dynamics of an aqueous and an organic phase exposed to intense centrifugal force and shearing stress. The onset of mixing is the result of the combination of the classical Rayleigh-Taylor and Kelvin-Helmholtz instabilities. A mixing environment that emulates a sector of the annular mixing zone of a centrifugal contactor is used for the mathematical domain. The domain is small enough to allow for resolution of the individual interfacial structures and large enough to allow for an analysis of their statistical distribution of sizes and shapes. A set of accurate algorithms for this application requires an advanced front tracking approach constrained by the incompressibility condition. This research is aimed at designing and implementing these algorithms. We demonstrate verification and convergence results for one-phase and unmixed, two-phase flows. In addition we report on preliminary results for mixed, two-phase flow for realistic operating flow parameters.
Solution of the Fokker-Planck equation with mixing of angular harmonics by beam-beam charge exchange
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, D.R.
1989-09-01
A method for solving the linear Fokker-Planck equation with anisotropic beam-beam charge exchange loss is presented. The 2-D equation is transformed to a system of coupled 1-D equations which are solved iteratively as independent equations. Although isotropic approximations to the beam-beam losses lead to inaccurate fast ion distributions, typically only a few angular harmonics are needed to include accurately the effect of the beam-beam charge exchange loss on the usual integrals of the fast ion distribution. Consequently, the algorithm converges very rapidly and is much more efficient than a 2-D finite difference method. A convenient recursion formula for the coupling coefficients is given and generalization of the method is discussed. 13 refs., 2 figs.
Moisture convergence using satellite-derived wind fields - A severe local storm case study
NASA Technical Reports Server (NTRS)
Negri, A. J.; Vonder Haar, T. H.
1980-01-01
Five-minute interval 1-km resolution SMS visible channel data were used to derive low-level wind fields by tracking small cumulus clouds on NASA's Atmospheric and Oceanographic Information Processing System. The satellite-derived wind fields were combined with surface mixing ratios to derive horizontal moisture convergence in the prestorm environment of April 24, 1975. Storms began developing in an area extending from southwest Oklahoma to eastern Tennessee 2 h subsequent to the time of the derived fields. The maximum moisture convergence was computed to be 0.0022 g/kg per sec and areas of low-level convergence of moisture were in general indicative of regions of severe storm genesis. The resultant moisture convergence fields derived from two wind sets 20 min apart were spatially consistent and reflected the mesoscale forcing of ensuing storm development. Results are discussed with regard to possible limitations in quantifying the relationship between low-level flow and satellite-derived cumulus motion in an antecedent storm environment.
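The derived quantity here, horizontal moisture convergence, is the negative divergence of the moisture flux q·V. A minimal centered-difference sketch of that calculation follows; the grid, field values, and units are hypothetical, and this is not the AOIPS processing chain.

```python
def moisture_flux_convergence(u, v, q, dx, dy):
    """Horizontal moisture convergence -[d(qu)/dx + d(qv)/dy] on the grid
    interior, using centered finite differences.
    Row index j runs along y, column index i along x."""
    ny, nx = len(q), len(q[0])
    conv = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dqu_dx = (q[j][i + 1] * u[j][i + 1] - q[j][i - 1] * u[j][i - 1]) / (2 * dx)
            dqv_dy = (q[j + 1][i] * v[j + 1][i] - q[j - 1][i] * v[j - 1][i]) / (2 * dy)
            conv[j][i] = -(dqu_dx + dqv_dy)
    return conv

# toy check: uniform mixing ratio with a linearly diverging u-field (u = x)
# gives a constant flux divergence q * du/dx on the interior
n = 5
q = [[8.0] * n for _ in range(n)]                     # mixing ratio (g/kg)
u = [[float(i) for i in range(n)] for _ in range(n)]  # du/dx = 1
v = [[0.0] * n for _ in range(n)]
conv = moisture_flux_convergence(u, v, q, dx=1.0, dy=1.0)
```

With these toy fields the interior convergence is exactly -8.0 g/kg per unit time, i.e. pure divergence; negative values indicate moisture being evacuated and positive values the low-level moisture pooling associated with storm genesis in the study.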
Development of Jet Noise Power Spectral Laws
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2011-01-01
High-quality jet noise spectral data measured at the Aero-Acoustic Propulsion Laboratory (AAPL) at NASA Glenn is used to develop jet noise scaling laws. A FORTRAN algorithm was written that provides detailed spectral prediction of component jet noise at user-specified conditions. The model generates quick estimates of the jet mixing noise and the broadband shock-associated noise (BBSN) in single-stream, axis-symmetric jets within a wide range of nozzle operating conditions. Shock noise is emitted when supersonic jets exit a nozzle at imperfectly expanded conditions. A successful scaling of the BBSN allows for this noise component to be predicted in both convergent and convergent-divergent nozzles. Configurations considered in this study consisted of convergent and convergent-divergent nozzles. Velocity exponents for the jet mixing noise were evaluated as a function of observer angle and jet temperature. Similar intensity laws were developed for the broadband shock-associated noise in supersonic jets. A computer program called sJet was developed that provides a quick estimate of component noise in single-stream jets at a wide range of operating conditions. A number of features have been incorporated into the data bank and subsequent scaling in order to improve jet noise predictions. Measurements have been converted to a lossless format. Set points have been carefully selected to minimize the instability-related noise at small aft angles. Regression parameters have been scrutinized for error bounds at each angle. Screech-related amplification noise has been kept to a minimum to ensure that the velocity exponents for the jet mixing noise remain free of amplifications. A shock-noise-intensity scaling has been developed independent of the nozzle design point. The computer program provides detailed narrow-band spectral predictions for component noise (mixing noise and shock-associated noise), as well as the total noise. Although the methodology is confined to single streams, efforts are underway to generate a data bank and algorithm applicable to dual-stream jets. Shock-associated noise in high-powered jets such as military aircraft can benefit from these predictions.
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Sebastian, J. D.; Weatherill, W. H.
1979-01-01
Analytical and empirical studies of a finite difference method for the solution of the transonic flow about harmonically oscillating wings and airfoils are presented. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady equations for small disturbances. Since sinusoidal motion is assumed, the unsteady equation is independent of time. Three finite difference investigations are discussed including a new operator for mesh points with supersonic flow, the effects on relaxation solution convergence of adding a viscosity term to the original differential equation, and an alternate and relatively simple downstream boundary condition. A method is developed which uses a finite difference procedure over a limited inner region and an approximate analytical procedure for the remaining outer region. Two investigations concerned with three-dimensional flow are presented. The first is the development of an oblique coordinate system for swept and tapered wings. The second derives the additional terms required to make row relaxation solutions converge when mixed flow is present. A finite span flutter analysis procedure is described using the two-dimensional unsteady transonic program with a full three-dimensional steady velocity potential.
NASA Astrophysics Data System (ADS)
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm still falls easily into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weight according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the model rebuilt from the estimated parameters shows that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
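The core idea of perturbing the PSO inertia weight with a logistic chaotic map can be sketched in a few lines. The concave-function component is simplified here to a linear decay, and the coefficients and sphere test objective are assumptions for illustration, not the authors' settings or their parameter-estimation problem.

```python
import random

random.seed(11)  # reproducible demo run

def chaotic_pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0):
    # The logistic chaotic map z <- 4 z (1 - z) perturbs a linearly
    # decaying inertia weight, so w wanders chaotically in [w_min, w_max]
    # instead of following a fixed schedule.
    z, w_max, w_min, c1, c2 = 0.37, 0.9, 0.4, 1.5, 1.5
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                            # chaotic iterate
        w = (w_max - w_min) * (1.0 - t / iters) * z + w_min  # chaotic inertia weight
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# minimize the 3-D sphere function as a stand-in objective
best = chaotic_pso(lambda v: sum(x * x for x in v), dim=3)
```

The chaotic perturbation keeps the swarm from settling into a single fixed inertia regime, which is the mechanism the paper relies on to escape local optima; the paper additionally couples the weight to the fitness value.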
Hayat, Tasawar; Ashraf, Muhammad Bilal; Alsulami, Hamed H.; Alhuthali, Muhammad Shahab
2014-01-01
The objective of the present research is to examine the thermal radiation effect in three-dimensional mixed convection flow of a viscoelastic fluid. The boundary layer analysis is discussed for flow over an exponentially stretching surface with convective conditions. The resulting partial differential equations are reduced to a system of nonlinear ordinary differential equations using appropriate transformations. The series solutions are developed through a modern technique known as the homotopy analysis method, and convergent expressions of the velocity components and temperature are derived. The solutions depend on seven sundry parameters: the viscoelastic parameter, mixed convection parameter, ratio parameter, temperature exponent, Prandtl number, Biot number, and radiation parameter. A systematic study is performed to analyze the impacts of these parameters on the velocity and temperature, the skin friction coefficients, and the local Nusselt number. It is observed that the mixed convection parameter plays opposite roles in the momentum and thermal boundary layers. The thermal boundary layer is found to decrease when the ratio parameter, Prandtl number, and temperature exponent are increased. The local Nusselt number is an increasing function of the viscoelastic parameter and Biot number, while the radiation parameter has the opposite effect on the Nusselt number compared with the viscoelastic parameter. PMID:24608594
Geometric MCMC for infinite-dimensional inverse problems
NASA Astrophysics Data System (ADS)
Beskos, Alexandros; Girolami, Mark; Lan, Shiwei; Farrell, Patrick E.; Stuart, Andrew M.
2017-04-01
Bayesian inverse problems often involve sampling posterior distributions on infinite-dimensional function spaces. Traditional Markov chain Monte Carlo (MCMC) algorithms are characterized by deteriorating mixing times upon mesh-refinement, when the finite-dimensional approximations become more accurate. Such methods are typically forced to reduce step-sizes as the discretization gets finer, and thus are expensive as a function of dimension. Recently, a new class of MCMC methods with mesh-independent convergence times has emerged. However, few of them take into account the geometry of the posterior informed by the data. At the same time, recently developed geometric MCMC algorithms have been found to be powerful in exploring complicated distributions that deviate significantly from elliptic Gaussian laws, but are in general computationally intractable for models defined in infinite dimensions. In this work, we combine geometric methods on a finite-dimensional subspace with mesh-independent infinite-dimensional approaches. Our objective is to speed up MCMC mixing times, without significantly increasing the computational cost per step (for instance, in comparison with the vanilla preconditioned Crank-Nicolson (pCN) method). This is achieved by using ideas from geometric MCMC to probe the complex structure of an intrinsic finite-dimensional subspace where most data information concentrates, while retaining robust mixing times as the dimension grows by using pCN-like methods in the complementary subspace. The resulting algorithms are demonstrated in the context of three challenging inverse problems arising in subsurface flow, heat conduction and incompressible flow control. The algorithms exhibit up to two orders of magnitude improvement in sampling efficiency when compared with the pCN method.
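The pCN baseline mentioned here has a simple form: the proposal x' = sqrt(1 - beta^2) x + beta xi, with xi drawn from the Gaussian prior, leaves the prior invariant, so the acceptance ratio involves only the likelihood and the step size needs no reduction as the dimension grows. A minimal sketch for a standard Gaussian prior follows; the toy likelihood and tuning are assumptions, and the paper's geometric variants are far more involved.

```python
import math, random

random.seed(3)  # reproducible demo run

def pcn_sampler(neg_log_lik, dim, beta=0.2, n_steps=20000):
    """Preconditioned Crank-Nicolson MCMC for a posterior with N(0, I) prior.
    Because the proposal preserves the prior, the accept/reject step uses
    only the negative log-likelihood Phi, independent of dimension."""
    x = [random.gauss(0, 1) for _ in range(dim)]
    phi = neg_log_lik(x)
    s = math.sqrt(1.0 - beta * beta)
    chain = []
    for _ in range(n_steps):
        prop = [s * xi + beta * random.gauss(0, 1) for xi in x]
        phi_prop = neg_log_lik(prop)
        # accept with probability min(1, exp(phi - phi_prop));
        # 1 - random() lies in (0, 1], so the log is always finite
        if math.log(1.0 - random.random()) < phi - phi_prop:
            x, phi = prop, phi_prop
        chain.append(x)
    return chain

# toy likelihood: each coordinate observed as y = 1 with unit noise;
# the exact posterior is then N(0.5, 0.5) in every coordinate
nll = lambda u: 0.5 * sum((ui - 1.0) ** 2 for ui in u)
chain = pcn_sampler(nll, dim=4)
```

After burn-in the empirical mean of each coordinate should approach the exact posterior mean 0.5; a geometric MCMC method would replace the isotropic proposal with one informed by the likelihood's local curvature on a data-dominated subspace.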
Science and technology convergence: with emphasis for nanotechnology-inspired convergence
NASA Astrophysics Data System (ADS)
Bainbridge, William S.; Roco, Mihail C.
2016-07-01
Convergence offers a new universe of discovery, innovation, and application opportunities through specific theories, principles, and methods to be implemented in research, education, production, and other societal activities. Using a holistic approach with shared goals, convergence seeks to transcend existing human limitations to achieve improved conditions for work, learning, aging, physical, and cognitive wellness. This paper outlines ten key theories that offer complementary perspectives on this complex dynamic. Principles and methods are proposed to facilitate and enhance science and technology convergence. Several convergence success stories in the first part of the 21st century—including nanotechnology and other emerging technologies—are discussed in parallel with case studies focused on the future. The formulation of relevant theories, principles, and methods aims at establishing the convergence science.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties for the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods; using it as a preconditioner for the PCG method therefore yields an efficient method with high parallelism and fast convergence. First, a necessary condition for the multigrid method to satisfy the requirements of a PCG preconditioner is considered. Numerical experiments then illustrate the behavior of the MGCG method and show that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. The fast convergence is understood through an eigenvalue analysis of the preconditioned matrix. From this examination of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
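The MGCG structure is visible in a generic preconditioned CG loop: the preconditioner application is a black box that, in MGCG, would be one multigrid V-cycle. To stay self-contained, the sketch below substitutes a simple Jacobi (diagonal) preconditioner, and the 1-D Poisson matrix is an assumed test problem, so this illustrates the PCG skeleton rather than the multigrid preconditioner itself.

```python
def pcg(A, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a dense SPD matrix A.
    precond(r) returns an approximate solve M^{-1} r; in the MGCG method
    this slot would be filled by a single multigrid V-cycle."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                      # r = b - A x with x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# assumed test problem: 1-D Poisson matrix with a Jacobi preconditioner
n = 20
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
b = [1.0] * n
jacobi = lambda r: [r[i] / A[i][i] for i in range(n)]
x = pcg(A, b, jacobi)
```

Jacobi damps only short-wavelength error, which is exactly why the paper's multigrid preconditioner, with its coarse-grid correction of long-wavelength components, yields the much faster MGCG convergence.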
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program compared well with other general software: programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively, and the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
Microsecond Molecular Dynamics Simulations of Lipid Mixing
2015-01-01
Molecular dynamics (MD) simulations of membranes are often hindered by the slow lateral diffusion of lipids and the limited time scale of MD. In order to study the dynamics of mixing and characterize the lateral distribution of lipids in converged mixtures, we report microsecond-long all-atom MD simulations performed on the special-purpose machine Anton. Two types of mixed bilayers, POPE:POPG (3:1) and POPC:cholesterol (2:1), as well as a pure POPC bilayer, were each simulated for up to 2 μs. These simulations show that POPE:POPG and POPC:cholesterol are each fully miscible at the simulated conditions, with the final states of the mixed bilayers similar to a random mixture. By simulating three POPE:POPG bilayers at different NaCl concentrations (0, 0.15, and 1 M), we also examined the effect of salt concentration on lipid mixing. While an increase in NaCl concentration is shown to affect the area per lipid, tail order, and lipid lateral diffusion, the final states of mixing remain unaltered, which is explained by the largely uniform increase in Na+ ions around POPE and POPG. Direct measurement of water permeation reveals that the POPE:POPG bilayer with 1 M NaCl has reduced water permeability compared with those at zero or low salt concentration. Our calculations provide a benchmark to estimate the convergence time scale of all-atom MD simulations of lipid mixing. Additionally, equilibrated structures of POPE:POPG and POPC:cholesterol, which are frequently used to mimic bacterial and mammalian membranes, respectively, can be used as starting points of simulations involving these membranes. PMID:25237736
Leading for the long haul: a mixed-method evaluation of the Sustainment Leadership Scale (SLS).
Ehrhart, Mark G; Torres, Elisa M; Green, Amy E; Trott, Elise M; Willging, Cathleen E; Moullin, Joanna C; Aarons, Gregory A
2018-01-19
Despite our progress in understanding the organizational context for implementation, and specifically the role of leadership in implementation, the role of leadership in sustainment has received little attention. This paper took a mixed-method approach to examine leadership during the sustainment phase of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Utilizing the Implementation Leadership Scale as a foundation, we sought to develop a short, practical measure of sustainment leadership that can be used for both applied and research purposes. Data for this study were collected as part of a larger mixed-method study of the sustainment of the evidence-based intervention SafeCare®. Quantitative data were collected from 157 providers using web-based surveys. Confirmatory factor analysis was used to examine the factor structure of the Sustainment Leadership Scale (SLS). Qualitative data were collected from 95 providers who participated in one of 15 focus groups. A framework approach guided qualitative data analysis. Mixed-method integration was also utilized to examine convergence of quantitative and qualitative findings. Confirmatory factor analysis supported the a priori higher order factor structure of the SLS, with subscales indicating a single higher order sustainment leadership factor. The SLS demonstrated excellent internal consistency reliability. Qualitative analyses offered support for the dimensions of sustainment leadership captured by the quantitative measure, in addition to uncovering a fifth possible factor, available leadership. This study found qualitative and quantitative support for the pragmatic SLS measure. The SLS can be used to assess first-level leaders, to understand how staff perceive leadership during sustainment, and to suggest areas where leaders could direct more attention in order to increase the likelihood that EBIs are institutionalized into the normal functioning of the organization.
NASA Astrophysics Data System (ADS)
Ahmad, S.; Farooq, M.; Javed, M.; Anjum, Aisha
2018-03-01
An analysis is carried out to study theoretically the mixed convection characteristics of the squeezing flow of a Sutterby fluid in a squeezed channel. The constitutive equation of the Sutterby model is utilized to characterize the rheology of the squeezing phenomenon. Flow characteristics are explored with dual stratification. In the flowing fluid, which carries heat and mass transport, a first order chemical reaction and radiative heat flux affect the transport phenomenon. The systems of non-linear governing equations are modeled and then solved by means of a convergent approach (the homotopy analysis method). Graphs are reported and illustrated for the emerging parameters. Through these graphical explanations, the drag force and the rates of heat and mass transport are discussed for different pertinent parameters. It is found that the heat and mass transport rates decay with dominant double stratification parameters and the chemical reaction parameter. The present two-dimensional examination is applicable in some engineering processes and industrial fluid mechanics.
NASA Astrophysics Data System (ADS)
Gupta, Diksha; Kumar, Lokendra; Bég, O. Anwar; Singh, Bani
2017-10-01
The objective of this paper is to study theoretically and numerically the effect of thermal radiation on mixed convection boundary layer flow of a dissipative micropolar non-Newtonian fluid from a continuously moving vertical porous sheet. The governing partial differential equations are transformed into a set of non-linear differential equations by using similarity transformations. These equations are solved iteratively with the Bellman-Kalaba quasi-linearization algorithm. This method converges quadratically and the solution is valid for a large range of parameters. The effects of transpiration (suction or injection) parameter, buoyancy parameter, radiation parameter and Eckert number on velocity, microrotation and temperature functions have been studied. Under a special case comparison of the present numerical results is made with the results available in the literature and an excellent agreement is found. Additionally skin friction and rate of heat transfer have also been computed. The study has applications in polymer processing.
NASA Astrophysics Data System (ADS)
Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.
In this study, we have constructed a mathematical model to investigate heat source/sink effects in the mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of the various involved parameters on pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis is presented for various values of the problem parameters. The numerical values of the wall shear stress and the Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular explanation of the critical Frank-Kamenetskii values is provided with respect to the other flow parameters.
A Numerical Study of the Effects of Curvature and Convergence on Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Reynolds, R.; White, C.
1987-01-01
An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2015-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.
Fast reconstruction of high-qubit-number quantum states via low-rate measurements
NASA Astrophysics Data System (ADS)
Li, K.; Zhang, J.; Cong, S.
2017-07-01
Due to the exponential complexity of the resources required by quantum state tomography (QST), there is interest in approaches to identifying quantum states that require less effort and time. In this paper, we provide a tailored and efficient method for reconstructing mixed quantum states of up to 12 (or even more) qubits from an incomplete set of observables subject to noise. Our method is applicable to any pure or nearly pure state ρ and can be extended to many states of interest in quantum information processing, such as multiparticle entangled W states, Greenberger-Horne-Zeilinger states, and cluster states that are matrix product operators of low dimension. The method applies the quantum density matrix constraints to a quantum compressive sensing optimization problem and exploits a modified quantum alternating direction method of multipliers (quantum-ADMM) to accelerate the convergence. Our algorithm takes 8, 35, and 226 seconds, respectively, to reconstruct superposition-state density matrices of 10, 11, and 12 qubits with acceptable fidelity, using less than 1% of the expectation measurements. To our knowledge, it is the fastest realization achievable on an ordinary desktop computer. We further discuss applications of this method using experimental data of mixed states obtained in an ion trap experiment with up to 8 qubits.
Mixed mimetic spectral element method for Stokes flow: A pointwise divergence-free solution
NASA Astrophysics Data System (ADS)
Kreeft, Jasper; Gerritsma, Marc
2013-05-01
In this paper we apply the recently developed mimetic discretization method to the mixed formulation of the Stokes problem in terms of vorticity, velocity and pressure. The mimetic discretization presented in this paper and in Kreeft et al. [51] is a higher-order method for curvilinear quadrilaterals and hexahedrals. Fundamental is the underlying structure of oriented geometric objects, the relation between these objects through the boundary operator and how this defines the exterior derivative, representing the grad, curl and div, through the generalized Stokes theorem. The mimetic method presented here uses the language of differential k-forms with k-cochains as their discrete counterpart, and the relations between them in terms of the mimetic operators: reduction, reconstruction and projection. The reconstruction consists of the recently developed mimetic spectral interpolation functions. The most important result of the mimetic framework is the commutation between differentiation at the continuous level with that on the finite dimensional and discrete level. As a result operators like gradient, curl and divergence are discretized exactly. For Stokes flow, this implies a pointwise divergence-free solution. This is confirmed using a set of test cases on both Cartesian and curvilinear meshes. It will be shown that the method converges optimally for all admissible boundary conditions.
Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Bin; Chen, Luoping; Hu, Xiaozhe
2016-03-05
In this paper, we study fast iterative solvers for fourth order parabolic equations discretized by mixed finite element methods. We propose to use a consistent mass matrix in the discretization and a lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRES method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.
The role of hot spot mix in the low-foot and high-foot implosions on the NIF
NASA Astrophysics Data System (ADS)
Ma, T.; Patel, P. K.; Izumi, N.; Springer, P. T.; Key, M. H.; Atherton, L. J.; Barrios, M. A.; Benedetti, L. R.; Bionta, R.; Bond, E.; Bradley, D. K.; Caggiano, J.; Callahan, D. A.; Casey, D. T.; Celliers, P. M.; Cerjan, C. J.; Church, J. A.; Clark, D. S.; Dewald, E. L.; Dittrich, T. R.; Dixit, S. N.; Döppner, T.; Dylla-Spears, R.; Edgell, D. H.; Epstein, R.; Field, J.; Fittinghoff, D. N.; Frenje, J. A.; Gatu Johnson, M.; Glenn, S.; Glenzer, S. H.; Grim, G.; Guler, N.; Haan, S. W.; Hammel, B. A.; Hatarik, R.; Herrmann, H. W.; Hicks, D.; Hinkel, D. E.; Berzak Hopkins, L. F.; Hsing, W. W.; Hurricane, O. A.; Jones, O. S.; Kauffman, R.; Khan, S. F.; Kilkenny, J. D.; Kline, J. L.; Kozioziemski, B.; Kritcher, A.; Kyrala, G. A.; Landen, O. L.; Lindl, J. D.; Le Pape, S.; MacGowan, B. J.; Mackinnon, A. J.; MacPhee, A. G.; Meezan, N. B.; Merrill, F. E.; Moody, J. D.; Moses, E. I.; Nagel, S. R.; Nikroo, A.; Pak, A.; Parham, T.; Park, H.-S.; Ralph, J. E.; Regan, S. P.; Remington, B. A.; Robey, H. F.; Rosen, M. D.; Rygg, J. R.; Ross, J. S.; Salmonson, J. D.; Sater, J.; Sayre, D.; Schneider, M. B.; Shaughnessy, D.; Sio, H.; Spears, B. K.; Smalyuk, V.; Suter, L. J.; Tommasini, R.; Town, R. P. J.; Volegov, P. L.; Wan, A.; Weber, S. V.; Widmann, K.; Wilde, C. H.; Yeamans, C.; Edwards, M. J.
2017-05-01
Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the "low-foot" class of implosions appears to have been mix. Lower convergence "high-foot" implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
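The recommendation to center and scale predictors can be illustrated directly: standardizing the columns of the design matrix collapses the condition number of the normal equations, which is what improves convergence, computing speed, and numerical accuracy in iterative model fitting. A small sketch in which the predictor scales (ages in years, incomes in dollars) are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Two predictors on very different scales (assumed illustrative data).
age = rng.uniform(20, 60, n)
income = rng.uniform(2e4, 2e5, n)
X = np.column_stack([np.ones(n), age, income])

# Center and scale (standardize) every non-intercept predictor.
Xs = X.copy()
Xs[:, 1:] = (X[:, 1:] - X[:, 1:].mean(axis=0)) / X[:, 1:].std(axis=0)

# Condition number of the normal-equation matrix before and after.
cond_raw = np.linalg.cond(X.T @ X)
cond_std = np.linalg.cond(Xs.T @ Xs)
```

The raw design yields an enormous condition number (driven by the squared income scale), while the standardized design is nearly perfectly conditioned, so any iterative fitting routine converges faster and more reliably.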
Iterative methods used in overlap astrometric reduction techniques do not always converge
NASA Astrophysics Data System (ADS)
Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.
1993-04-01
In this paper we prove that the classical Gauss-Seidel-type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge, and we exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can converge when the Gauss-Seidel method diverges. We conjecture that Wang's method converges for the solution of astrometric problems using overlap techniques.
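The paper's central point, that Gauss-Seidel need not converge, is easy to reproduce on a toy system. In the sketch below (the 2×2 matrix is an illustration, not one of the paper's astrometric systems), the Gauss-Seidel iteration matrix has spectral radius 1.5, so the sweeps diverge:

```python
import numpy as np

# A system that is neither diagonally dominant nor SPD: Gauss-Seidel's
# iteration matrix -(D+L)^{-1} U has eigenvalues {0, 1.5}, so it diverges.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([1.0, 1.0])

def gauss_seidel(A, b, x0, sweeps):
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

x = gauss_seidel(A, b, np.zeros(2), 20)
# The residual grows roughly like 1.5**k instead of shrinking.
```

A direct solve (`np.linalg.solve(A, b)`) of course succeeds; the failure is a property of the splitting, which is exactly the pitfall the paper documents for overlap reductions.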
Using HT and DT gamma rays to diagnose mix in Omega capsule implosions
NASA Astrophysics Data System (ADS)
Schmitt, M. J.; Herrmann, H. W.; Kim, Y. H.; McEvoy, A. M.; Zylstra, A.; Hammel, B. A.; Sepke, S. M.; Leatherland, A.; Gales, S.
2016-05-01
Experimental evidence [1] indicates that shell material can be driven into the core of Omega capsule implosions on the same time scale as the initial convergent shock. It has been hypothesized that shock-generated temperatures at the fuel/shell interface in thin exploding pusher capsules diffusively drive shell material into the gas core between the time of shock passage and bang time. We propose a method to temporally resolve and observe the evolution of shell material into the capsule core as a function of fuel/shell interface temperature (which can be varied by varying the capsule shell thickness). Our proposed method uses a CD plastic capsule filled with 50/50 HT gas and diagnosed using gas Cherenkov detection (GCD) to temporally resolve both the HT “clean” and DT “mix” gamma ray burn histories. Simulations using Hydra [2] for an Omega CD-lined capsule with a sub-micron layer of the inside surface of the shell pre-mixed into a fraction of the gas region produce gamma reaction history profiles that are sensitive to the depth to which this material is mixed. An experiment to observe these differences as a function of capsule shell thickness is proposed to determine whether interface mixing is consistent with thermal diffusion, λ_ii ∼ T²/(Z²ρ), at the gas/shell interface. Since hydrodynamic mixing from shell perturbations, such as the mounting stalk and glue, could complicate these capsule-averaged temporal measurements, simulations including their effects have also been performed, showing minimal perturbation of the hot spot geometry.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular parameter choices. In particular, it can be used for ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance performance, namely convergence speed and noise reduction, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
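The mixed-norm cost the abstract describes can be sketched in a simplified setting. The demo below drops the filtered-x (secondary path) part specific to ANC and applies the same convex mixture of error norms, here l2 and l1, to a plain system-identification problem; the unknown system, filter length, step size, and mixing parameter λ are all assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.3, 0.1])     # unknown FIR system (assumed demo values)
N, L = 5000, 3
x = rng.standard_normal(N)              # input signal
d = np.convolve(x, w_true)[:N]          # desired signal

# Mixed-norm cost J = lam*|e|^p + (1-lam)*|e|^q with p=2, q=1:
# a convex blend of LMS-like and sign-LMS-like updates.
lam, p, q, mu = 0.5, 2, 1, 0.01
w = np.zeros(L)
for n in range(L, N):
    u = x[n-L+1:n+1][::-1]              # regressor [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ u
    grad = (lam * p * abs(e)**(p-1) * np.sign(e)
            + (1 - lam) * q * abs(e)**(q-1) * np.sign(e))
    w += mu * grad * u                  # stochastic-gradient ascent on -J
```

Choosing (λ, p, q) recovers familiar special cases: λ=1, p=2 is plain LMS, while λ=0, q=1 is the sign-error algorithm, matching the abstract's claim that the mixed-norm family generalizes existing adaptive filters.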
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequences is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, as well as some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences that are obtained when one solves linear systems of equations iteratively. A stability analysis is also given, and numerical examples are provided. The convergence and stability properties of the topological epsilon algorithm are likewise given.
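Of the methods named, reduced rank extrapolation (RRE) is compact enough to sketch: it solves a small least-squares problem in the first and second differences of the iterates and, for a linear fixed-point iteration, can recover the limit exactly. A sketch on an assumed diagonal contraction (the `rre` helper and the test iteration are illustrative, not the paper's formulation):

```python
import numpy as np

def rre(xs):
    """Reduced rank extrapolation from a list of iterates xs, assumed to
    come from a (nearly) linear fixed-point iteration."""
    X = np.array(xs).T                  # columns are the iterates
    dX = np.diff(X, axis=1)             # first differences
    d2X = np.diff(dX, axis=1)           # second differences
    # Solve d2X @ eta ≈ dX[:, 0] in least squares, then extrapolate.
    eta, *_ = np.linalg.lstsq(d2X, dX[:, 0], rcond=None)
    return X[:, 0] - dX[:, :-1] @ eta

# Slowly convergent linear iteration x_{k+1} = M x_k + f (assumed demo).
rng = np.random.default_rng(3)
M = 0.95 * np.diag([1.0, 0.8, 0.5, 0.2])    # spectral radius 0.95
f = rng.standard_normal(4)
x_star = np.linalg.solve(np.eye(4) - M, f)  # true fixed point

xs = [np.zeros(4)]
for _ in range(8):
    xs.append(M @ xs[-1] + f)

x_rre = rre(xs)   # far closer to x_star than the last raw iterate
```

Because the iteration here is exactly linear and enough differences are supplied, the extrapolant matches the fixed point to roundoff, while the raw iterates are still far away, which is the acceleration behavior the abstract describes for iteratively solved linear systems.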
Mixing Efficiency in the Ocean.
Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E
2018-01-03
Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
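The role of the mixing coefficient described above is to convert a measured turbulent dissipation rate into a diapycnal diffusivity, via the Osborn relation K_ρ = Γ ε / N². A one-line sketch with assumed (but typical) thermocline values; only the coefficient Γ ≈ 0.2 comes from the abstract, the ε and N² inputs are invented for the demo:

```python
# Diapycnal diffusivity from dissipation via the Osborn relation.
gamma = 0.2      # mixing coefficient (the value verified against tracer releases)
eps = 1e-9       # turbulent dissipation rate, W/kg (assumed typical value)
N2 = 1e-5        # buoyancy frequency squared, s^-2 (assumed typical value)

K_rho = gamma * eps / N2   # diapycnal diffusivity, m^2/s
```

With these inputs K_ρ comes out at 2e-5 m²/s, of the order of canonical pycnocline diffusivities, which is why an accurate Γ matters for the at-sea comparisons the abstract discusses.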
Towards implementing coordinated healthy lifestyle promotion in primary care: a mixed method study.
Thomas, Kristin; Bendtsen, Preben; Krevers, Barbro
2015-01-01
Primary care is increasingly being encouraged to integrate healthy lifestyle promotion into routine care, but implementation has been suboptimal. Coordinated care could facilitate lifestyle promotion practice, yet more empirical knowledge is needed about the implementation process of coordinated care initiatives. This study aimed to evaluate the implementation of a coordinated healthy lifestyle promotion initiative in a primary care setting. A mixed method, convergent, parallel design was used. Three primary care centres took part in a two-year research project. Data collection methods included individual interviews, document data and questionnaires. The General Theory of Implementation was used as a framework in the analysis to integrate the data sources. Multi-disciplinary teams were implemented in the centres, although the role of the teams as a resource for coordinated lifestyle promotion was not fully embedded at the centres. Embedding of the teams was challenged by differences among the staff, patients and team members regarding resources, commitment, social norms and roles. The study highlights the importance of identifying and engaging key stakeholders early in an implementation process. The findings showed how the development phase influenced the implementation and embedding processes, which adds aspects to the General Theory of Implementation.
Brief report: Bereaved parents informing research design: The place of a pilot study.
Donovan, L A; Wakefield, C E; Russell, V; Hetherington, Kate; Cohn, R J
2018-02-23
Risk minimization in research with bereaved parents is important. However, little is known about which research methods balance the sensitivity required for bereaved research participants and the need for generalizable results. To explore parental experiences of participating in mixed method bereavement research via a pilot study. A convergent parallel mixed method design assessing bereaved parents' experience of research participation. Eleven parents whose child was treated for cancer at The Royal Children's Hospital, Brisbane completed the questionnaire/interview being piloted (n = 8 mothers; n = 3 fathers; >6 months and <6 years bereaved). Of these, eight parents completed the pilot study evaluation questionnaire, providing feedback on their experience of participation. Participants acknowledged the importance of bereaved parents being central to research design and the development of bereavement programs. Sixty-three per cent (n = 5/8) of parents described completion of the questionnaire as 'not at all/a little bit' of a burden. Seventy-five per cent (n = 6/8) of parents opting into the telephone interview described participation as 'not at all/a little bit' of a burden. When considering the latest timeframes for participation in bereavement research 63% (n = 5/8) of parents indicated 'no endpoint.' Findings from the pilot study enabled important adjustments to be made to a large-scale future study. As a research method, pilot studies may be utilized to minimize harm and maximize the potential benefits for vulnerable research participants. A mixed method approach allows researchers to generalize findings to a broader population while also drawing on the depth of the lived experience.
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.
1986-01-01
A microcomputer code which displays 3-D oblique and 2-D plots of the temperature distribution downstream of jets mixing with a confined crossflow has been used to investigate the effects of varying the several independent flow and geometric parameters on the mixing. Temperature profiles calculated with this empirical model are presented to show the effects of orifice size and spacing, momentum flux ratio, density ratio, variable temperature mainstream, flow area convergence, orifice aspect ratio, and opposed and axially staged rows of jets.
Hommes, J; Van den Bossche, P; de Grave, W; Bos, G; Schuwirth, L; Scherpbier, A
2014-10-01
Little is known about how time influences collaborative learning groups in medical education. Therefore, a thorough exploration of the development of learning processes over time was undertaken in an undergraduate PBL curriculum over 18 months. A mixed-methods triangulation design was used. First, the quantitative study measured how various learning processes developed within and over three periods in the first 1.5 study years of an undergraduate curriculum. Next, a qualitative study using semi-structured individual interviews focused on the detailed development of group processes driving collaborative learning during one period in seven tutorial groups. The hierarchical multilevel analyses of the quantitative data showed that a varying combination of group processes developed within and over the three observed periods. The qualitative study illustrated development in psychological safety, interdependence, potency, group learning behaviour, and social and task cohesion. Two new processes emerged: 'transactive memory' and 'convergence in mental models'. The results indicate that groups are dynamic social systems subject to numerous contextual influences. Future research should thus include time as an important influence on collaborative learning. Practical implications are discussed.
Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng
2014-01-01
This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
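The pipeline described above (background modelling, then classification of moving objects) can be illustrated with a deliberately simplified sketch: a per-pixel single-Gaussian running background model in NumPy, standing in for the paper's Gaussian mixture model. All shapes, learning rates and thresholds below are invented for illustration.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel single-Gaussian background model (a simplified stand-in
    for the paper's Gaussian mixture model)."""
    def __init__(self, shape, alpha=0.05, k=3.0):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.alpha = alpha      # learning rate (invented value)
        self.k = k              # foreground threshold in standard deviations

    def apply(self, frame):
        # Pixels far from the background mean are flagged as foreground.
        fg = np.abs(frame - self.mean) > self.k * np.sqrt(self.var)
        # Background statistics are updated only where the scene is static.
        d = frame - self.mean
        self.mean = np.where(fg, self.mean, self.mean + self.alpha * d)
        self.var = np.where(fg, self.var,
                            (1 - self.alpha) * self.var + self.alpha * d**2)
        return fg

# Train on static background frames, then present a frame with a bright blob
# playing the role of moving foreign matter.
rng = np.random.default_rng(0)
model = RunningGaussianBackground((32, 32))
for _ in range(100):
    model.apply(rng.normal(0.0, 0.1, (32, 32)))
test_frame = rng.normal(0.0, 0.1, (32, 32))
test_frame[10:14, 10:14] += 5.0      # simulated foreign matter
mask = model.apply(test_frame)
print(mask[10:14, 10:14].all(), mask.mean())
```

In the paper the extracted objects are then described by features and classified by a neural network; the sketch stops at foreground extraction.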
Hess, Rosanna F; Ross, Ratchneewan; Gililland, John L, Jr
2018-03-01
Relatively little is known about infertility and its consequences in Mali, West Africa, where the context and culture differ from those of previously studied settings. This study therefore aimed to examine infertility-induced psychological distress and coping strategies among women in Mali. A convergent mixed-methods design (correlational cross-sectional and qualitative descriptive) guided the study. Fifty-eight infertile Malian women participated: 52 completed the Psychological Evaluation Test specific for infertility and a question on general health status, and 26 were interviewed in depth. Over 20% scored above the cut-off point for psychological distress, and 48% described their general health as poor. There was no significant difference between women with primary vs. secondary infertility. The study found that infertile women lived with marital tensions, criticism from relatives, and stigmatization from the community. They experienced sadness, loneliness, and social deprivation. Coping strategies included traditional and biomedical treatments, religious faith and practices, and self-isolation. Health care professionals should provide holistic care for infertile women to meet their physical, spiritual, psychological, and social needs.
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water-quality-related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
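The Morris screening method discussed above can be sketched with a minimal one-at-a-time elementary-effects implementation (plain NumPy; the toy model, factor count and sample sizes are invented, and a real study would use an established GSA toolbox such as SALib):

```python
import numpy as np

def morris_mu_star(f, k, r=50, delta=0.25, seed=1):
    """Elementary-effects (Morris) screening on the unit hypercube.
    Returns mu* (mean |EE|) and sigma (std of EE) for each of k factors."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # base point of trajectory t
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                      # one-at-a-time perturbation
            ee[t, i] = (f(xp) - fx) / delta     # elementary effect of factor i
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# A toy "model": factor 0 dominates, factor 1 is weakly nonlinear,
# factor 2 is non-influential (and should be screened out).
f = lambda x: 2.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]
mu_star, sigma = morris_mu_star(f, k=3)
print(np.round(mu_star, 2))   # mu_star[0] is largest, mu_star[2] is zero
```

High mu* flags important factors; high sigma relative to mu* flags interaction or nonlinearity, which is the distinction the paper's classification scheme formalises.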
Convergence acceleration of molecular dynamics methods for shocked materials using velocity scaling
NASA Astrophysics Data System (ADS)
Taylor, DeCarlos E.
2017-03-01
In this work, a convergence acceleration method applicable to extended system molecular dynamics techniques for shock simulations of materials is presented. The method uses velocity scaling to reduce the instantaneous value of the Rankine-Hugoniot conservation of energy constraint used in extended system molecular dynamics methods to more rapidly drive the system towards a converged Hugoniot state. When used in conjunction with the constant stress Hugoniostat method, the velocity-scaled trajectories show faster convergence to the final Hugoniot state, with little difference observed in the converged Hugoniot energy, pressure, volume and temperature. A derivation of the scale factor is presented and the performance of the technique is demonstrated using the boron carbide armour ceramic as a test material. It is shown that simulations of boron carbide Hugoniot states, from 5 to 20 GPa, using both a classical Tersoff potential and an ab initio density functional, are more rapidly convergent when the velocity scaling algorithm is applied. The accelerated convergence afforded by the current algorithm enables more rapid determination of Hugoniot states, thus reducing the computational demand of such studies when using expensive ab initio or classical potentials.
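The generic shape of such a velocity-scaling step can be sketched as follows; the paper derives its own scale factor from the Rankine-Hugoniot constraint, whereas this toy version simply rescales all velocities by a uniform factor so that the kinetic energy absorbs a prescribed energy mismatch delta_e:

```python
import numpy as np

def velocity_scale(v, m, delta_e):
    """Uniformly rescale particle velocities so the total kinetic energy
    changes by exactly delta_e (generic form; the paper's scale factor is
    derived from the Rankine-Hugoniot energy constraint instead)."""
    ke = 0.5 * np.sum(m[:, None] * v**2)
    s = np.sqrt(1.0 + delta_e / ke)    # kinetic energy scales as s^2
    return s * v

# Synthetic example: drain 0.5 energy units from 64 unit-mass particles.
rng = np.random.default_rng(3)
m = np.ones(64)
v = rng.normal(size=(64, 3))
ke_before = 0.5 * np.sum(m[:, None] * v**2)
v2 = velocity_scale(v, m, delta_e=-0.5)
ke_after = 0.5 * np.sum(m[:, None] * v2**2)
print(round(ke_before - ke_after, 6))   # kinetic energy drops by 0.5
```

In a Hugoniostat run, delta_e would be the instantaneous violation of the energy constraint, so each scaling nudges the trajectory toward the converged Hugoniot state.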
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
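For reference, the three variational formulations being compared can be summarised as follows (standard notation, ours: F is the forward operator, y^δ the noisy data, α, ρ, δ the respective parameters):

```latex
% Tikhonov: penalised least squares with regularization parameter \alpha
\min_x \; \|F(x) - y^\delta\|^2 + \alpha \|x\|^2
% Ivanov (method of quasi solutions): constrain the solution norm
\min_x \; \|F(x) - y^\delta\| \quad \text{s.t.} \quad \|x\| \le \rho
% Morozov (method of the residuals): constrain the residual by the noise level
\min_x \; \|x\| \quad \text{s.t.} \quad \|F(x) - y^\delta\| \le \delta
```

The paper's point is that, in general, these three problem families are not equivalent reparametrisations of one another.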
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria; it also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects on the mixing of varying flow and geometric parameters. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow. Supplement
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J
1995-08-01
The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
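For orientation, the hemispheric isovelocity assumption referred to above corresponds to the standard flow convergence (PISA) relations, written here in our notation: with aliasing radius r(t) and aliasing velocity v_a, the instantaneous flow rate and regurgitant orifice area are

```latex
% Instantaneous flow rate through a hemispheric isovelocity shell
Q(t) = 2\pi\, r(t)^2\, v_a
% Regurgitant orifice area from the continuous-wave Doppler velocity v_{CW}
\mathrm{ROA}(t) = \frac{Q(t)}{v_{CW}(t)}
```

The study's finding that lower aliasing velocities overestimate the reference areas reflects the breakdown of the hemispheric shell assumption far from the orifice.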
NASA Technical Reports Server (NTRS)
Sjogreen, Bjoern; Yee, H. C.
2007-01-01
Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets on other parts of the flow field, are difficult to capture accurately and efficiently employing the same numerical scheme even under the multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low order and high order shock-capturing schemes for the subject flows, a multiblock overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method in which a restart strategy is also used. Moreover, the method we developed achieves n-step quadratic convergence and uses both function value and gradient value information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
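For comparison, a textbook PRP scheme with an Armijo backtracking line search and a simple nonnegativity restart (the classical PRP+ safeguard, not the new method proposed in the paper) can be sketched as:

```python
import numpy as np

def prp_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Polak-Ribiere-Polyak conjugate gradient with Armijo backtracking
    and restarts (a generic PRP+ scheme, for illustration only)."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)    # PRP formula
        beta = max(beta, 0.0)                   # PRP+ restart when beta < 0
        d = -g_new + beta * d
        if g_new @ d > -1e-12:                  # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Convex quadratic test: f(x) = 1/2 x^T A x - b^T x, minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, np.zeros(2))
print(np.allclose(A @ x_star, b, atol=1e-6))
```

The paper's contribution lies in modifying the PRP update and restart so that quadratic (not merely linear) convergence can be proven; the sketch only shows the baseline structure such methods share.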
Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter
2009-06-21
Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H5O2+ cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined-mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D2d symmetry, instead of the G16 symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.
Rigon, Arianna; Reber, Justin; Patel, Nirav N; Duff, Melissa C
2018-06-08
While deficits in several cognitive domains following moderate-to-severe traumatic brain injury (TBI) have been well documented, little is known about the impact of TBI on creativity. In the current study, our goal was to determine whether convergent problem solving, which contributes to creative thinking, is impaired following TBI. We administered a test of convergent problem solving, the Remote Associate Task (RAT), as well as a battery of neuropsychological tests, to 29 individuals with TBI and 20 healthy comparisons. A mixed-effect regression analysis revealed that individuals with TBI were significantly less likely to produce a correct response, although on average they attempted to respond to the same number of items. Moreover, we found that the TBI (but not the comparison) group's performance on the RAT was significantly and positively associated with verbal learning and memory, providing further evidence supporting the association between declarative memory and creative convergent thinking. In summary, our findings reveal that convergent thinking can be compromised by moderate-to-severe TBI, furthering our understanding of the higher-level cognitive sequelae of TBI.
The role of hot spot mix in the low-foot and high-foot implosions on the NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T.; Patel, P. K.; Izumi, N.
Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the “low-foot” class of implosions appears to have been mix. As a result, lower convergence “high-foot” implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.
The role of hot spot mix in the low-foot and high-foot implosions on the NIF
Ma, T.; Patel, P. K.; Izumi, N.; ...
2017-05-18
Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the “low-foot” class of implosions appears to have been mix. As a result, lower convergence “high-foot” implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.
NASA Astrophysics Data System (ADS)
Roberts, C. D.; Palmer, M. D.; Allan, R. P.; Desbruyeres, D. G.; Hyder, P.; Liu, C.; Smith, D.
2017-01-01
We present an observation-based heat budget analysis for seasonal and interannual variations of ocean heat content (H) in the mixed layer (Hmld) and full-depth ocean (Htot). Surface heat flux and ocean heat content estimates are combined using a novel Kalman smoother-based method. Regional contributions from ocean heat transport convergences are inferred as a residual and the dominant drivers of Hmld and Htot are quantified for seasonal and interannual time scales. We find that non-Ekman ocean heat transport processes dominate Hmld variations in the equatorial oceans and regions of strong ocean currents and substantial eddy activity. In these locations, surface temperature anomalies generated by ocean dynamics result in turbulent flux anomalies that drive the overlying atmosphere. In addition, we find large regions of the Atlantic and Pacific oceans where heat transports combine with local air-sea fluxes to generate mixed layer temperature anomalies. In all locations, except regions of deep convection and water mass transformation, interannual variations in Htot are dominated by the internal rearrangement of heat by ocean dynamics rather than the loss or addition of heat at the surface. Our analysis suggests that, even in extratropical latitudes, initialization of ocean dynamical processes could be an important source of skill for interannual predictability of Hmld and Htot. Furthermore, we expect variations in Htot (and thus thermosteric sea level) to be more predictable than near surface temperature anomalies due to the increased importance of ocean heat transport processes for full-depth heat budgets.
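The residual-based budget closure described above amounts to solving dH/dt = Q_net + C for the transport convergence C. A toy NumPy illustration with synthetic monthly data (all values invented; the paper's actual estimate uses a Kalman smoother rather than simple differencing):

```python
import numpy as np

# Synthetic monthly heat content H (J m^-2) and net surface flux Q (W m^-2)
# for one region; transport convergence C is inferred as the budget residual
# C = dH/dt - Q, mirroring the closure described in the abstract.
seconds = 30 * 24 * 3600.0                            # seconds per idealised month
t = np.arange(12)
conv_true = 5.0 * np.sin(2 * np.pi * t / 12)          # prescribed convergence, W m^-2
q_net = 20.0 * np.cos(2 * np.pi * t / 12)             # prescribed surface flux, W m^-2
h = np.concatenate([[0.0], np.cumsum((q_net + conv_true) * seconds)])
dhdt = np.diff(h) / seconds                           # heat content tendency, W m^-2
conv_inferred = dhdt - q_net                          # residual = inferred convergence
print(np.allclose(conv_inferred, conv_true))
```

In practice both H and Q carry observational uncertainty, which is why the paper combines them with a Kalman smoother instead of differencing directly.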
NASA Astrophysics Data System (ADS)
Dawson, Joshua
A novel multi-mode implementation of a pulsed detonation engine, put forth by Wilson et al., consists of four modes; each specifically designed to capitalize on flow features unique to the various flow regimes. This design enables the propulsion system to generate thrust through the entire flow regime. The Multi-Mode Ejector-Augmented Pulsed Detonation Rocket Engine operates in mode one during take-off conditions through the acceleration to supersonic speeds. Once the mixing chamber internal flow exceeds supersonic speed, the propulsion system transitions to mode two. While operating in mode two, supersonic air is compressed in the mixing chamber by an upstream propagating detonation wave and then exhausted through the convergent-divergent nozzle. Once the velocity of the air flow within the mixing chamber exceeds the Chapman-Jouguet Mach number, the upstream propagating detonation wave no longer has sufficient energy to propagate upstream and consequently the propulsive system shifts to mode three. As a result of the inability of the detonation wave to propagate upstream, a steady oblique shock system is established just upstream of the convergent-divergent nozzle to initiate combustion. And finally, the propulsion system progresses on to mode four operation, consisting purely of a pulsed detonation rocket for high Mach number flight and use in the upper atmosphere as is needed for orbital insertion. Modes three and four appear to be a fairly significant challenge to implement, while the challenge of implementing modes one and two may prove to be a more practical goal in the near future. A vast number of potential applications exist for a propulsion system that would utilize modes one and two, namely a high Mach number hypersonic cruise vehicle. There is particular interest in the dynamics of mode one operation, which is the subject of this research paper. Several advantages can be obtained by use of this technology. 
Geometrically the propulsion system is fairly simple, and as a result of the rapid combustion process the engine cycle is more efficient compared to its combined cycle counterparts. The flow path geometry consists of an inlet system, followed just downstream by a mixing chamber where an ejector structure is placed within the flow path. Downstream of the ejector structure is a duct leading to a convergent-divergent nozzle. During mode one operation and within the ejector, products from the detonation of a stoichiometric hydrogen/air mixture are exhausted directly into the surrounding secondary air stream. Mixing then occurs between both the primary and secondary flow streams, at which point the air mass containing the high pressure, high temperature reaction products is convected downstream towards the nozzle. The engine cycle is engineered to a specific number of detonations per second, creating the pulsating characteristic of the primary flow. The pulsing nature of the primary flow serves as a momentum augmentation, enhancing the thrust and specific impulse at low speeds. Consequently it is necessary to understand the transient mixing process between the primary and secondary flow streams occurring during mode one operation. Using OpenFOAM®, an analytic tool is developed to simulate the dynamics of the turbulent detonation process along with detailed chemistry in order to understand the physics involved with the stream interactions. The computational code has been developed within the framework of OpenFOAM®, an open-source alternative to commercial CFD software. A conservative formulation of the Favre-averaged Navier-Stokes equations is implemented to facilitate programming and numerical stability. Time discretization is accomplished by using the Crank-Nicolson method, achieving second order convergence in time.
Species mass fraction transport equations are implemented, and a Seulex ODE solver is used to resolve the system of ordinary differential equations describing the hydrogen-air reaction mechanism detailed in Appendix A. The Seulex ODE solution algorithm is an extrapolation method based on the linearly implicit Euler method with step size control. A second order total variation diminishing method with a modified Sweby flux limiter is used for space discretization. Finally, the use of operator splitting (the PISO algorithm and chemical kinetics) is essential due to the significant differences in characteristic time scales evolving simultaneously in turbulent reactive flow. The turbulent nature of the combustion process is captured using the k-ω SST turbulence model, as formulated by Menter [1]. Menter's formulation is well suited to resolve the boundary layer while remaining relatively insensitive to freestream conditions, blending the merits of both the k-ω and k-ε models. Further development of the tool is possible, most notably with the Numerical Propulsion System Simulation (NPSS) application. NPSS allows the user to take advantage of a "zooming" functionality in which high fidelity models of engine components can be integrated into NPSS models, allowing for a more robust propulsion system simulation.
Convergence analysis of a monotonic penalty method for American option pricing
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Xiaoqi; Teo, Kok Lay
2008-12-01
This paper is devoted to the convergence analysis of a monotonic penalty method for pricing American options. A monotonic penalty method is first proposed to solve the complementarity problem arising from the valuation of American options, which produces a nonlinear degenerate parabolic PDE with the Black-Scholes operator. Based on variational theory, the solvability and convergence properties of this penalty approach are established in a proper infinite-dimensional space. Moreover, the convergence rate of the combination of two power penalty functions is obtained.
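As a sketch of the setting, in our notation (a generic power penalty, not necessarily the authors' exact formulation): the American option value u with payoff g satisfies a linear complementarity problem for the Black-Scholes operator, which the penalty method replaces by a single nonlinear PDE.

```latex
% Black-Scholes operator acting on the option value u(S, t)
\mathcal{L}u = \partial_t u + \tfrac{1}{2}\sigma^2 S^2 \partial_{SS} u
             + r S \partial_S u - r u
% Complementarity formulation with payoff g(S)
\mathcal{L}u \le 0, \qquad u \ge g, \qquad (\mathcal{L}u)(u - g) = 0
% Power penalty approximation, parameter \lambda \to \infty, power 1/k
\mathcal{L}u_\lambda + \lambda\, [\, g - u_\lambda \,]_+^{1/k} = 0
```

The penalty term activates only where the constraint u ≥ g is violated, and the paper's analysis concerns how fast u_λ converges to u as λ grows.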
Stenger, Kristen M; Ritter-Gooder, Paula K; Perry, Christina; Albrecht, Julie A
2014-12-01
Children are at a higher risk for foodborne illness. The objective of this study was to explore food safety knowledge, beliefs and practices among Hispanic families with young children (≤10 years of age) living within a Midwestern state. A convergent mixed methods design collected qualitative and quantitative data in parallel. Food safety knowledge surveys were administered (n = 90) prior to exploration of beliefs and practices among six focus groups (n = 52) conducted by bilingual interpreters in community sites in five cities/towns. Descriptive statistics determined knowledge scores and thematic coding unveiled beliefs and practices. Data sets were merged to assess concordance. Participants were female (96%), 35.7 (±7.6) years of age, from Mexico (69%), with the majority having a low education level. Food safety knowledge was low (56% ± 11). Focus group themes were: Ethnic dishes popular, Relating food to illness, Fresh food in home country, Food safety practices, and Face to face learning. Mixed method analysis revealed high self confidence in preparing food safely with low safe food handling knowledge and the presence of some cultural beliefs. On-site Spanish classes and materials were preferred venues for food safety education. Bilingual food safety messaging targeting common ethnic foods and cultural beliefs and practices is indicated to lower the risk of foodborne illness in Hispanic families with young children.
Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc
2018-05-01
Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity (the two outcomes of interest in meta-analyses of diagnostic accuracy studies) utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
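For context, the Piola transform mentioned above maps reference velocities on the unit cube to the physical hexahedron via the trilinear map F, with Jacobian matrix DF and determinant J = det DF (standard notation, ours):

```latex
% Piola (contravariant) transform: preserves normal fluxes across cell faces
v(x) = \frac{1}{J(\hat{x})}\, DF(\hat{x})\, \hat{v}(\hat{x}),
\qquad x = F(\hat{x})
```

Because DF varies across a general (non-parallelepiped) hexahedron, the Piola image of the usual linear reference shape functions cannot reproduce uniform flow exactly, which is what motivates the quadratically interpolating shape functions developed in the paper.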
Aerosol-cloud interactions in mixed-phase convective clouds - Part 1: Aerosol perturbations
NASA Astrophysics Data System (ADS)
Miltenberger, Annette K.; Field, Paul R.; Hill, Adrian A.; Rosenberg, Phil; Shipway, Ben J.; Wilkinson, Jonathan M.; Scovell, Robert; Blyth, Alan M.
2018-03-01
Changes induced by perturbed aerosol conditions in moderately deep mixed-phase convective clouds (cloud top height ~5 km) developing along sea-breeze convergence lines are investigated with high-resolution numerical model simulations. The simulations utilise the newly developed Cloud-AeroSol Interacting Microphysics (CASIM) module for the Unified Model (UM), which allows for the representation of the two-way interaction between cloud and aerosol fields. Simulations are evaluated against observations collected during the COnvective Precipitation Experiment (COPE) field campaign over the southwestern peninsula of the UK in 2013. The simulations compare favourably with observed thermodynamic profiles, cloud base cloud droplet number concentrations (CDNC), cloud depth, and radar reflectivity statistics. Including the modification of aerosol fields by cloud microphysical processes improves the correspondence with observed CDNC values and spatial variability, but reduces the agreement with observations for average cloud size and cloud top height. Accumulated precipitation is suppressed for higher-aerosol conditions before clouds become organised along the sea-breeze convergence lines. Changes in precipitation are smaller in simulations with aerosol processing. The precipitation suppression is due to less efficient precipitation production by warm-phase microphysics, consistent with parcel model predictions. In contrast, after convective cells organise along the sea-breeze convergence zone, accumulated precipitation increases with aerosol concentrations. Condensate production increases with the aerosol concentrations due to higher vertical velocities in the convective cores and higher cloud top heights. However, for the highest-aerosol scenarios, no further increase in the condensate production occurs, as clouds grow into an upper-level stable layer.
In these cases, the reduced precipitation efficiency (PE) dominates the precipitation response and no further precipitation enhancement occurs. Previous studies of deep convective clouds have related larger vertical velocities under high-aerosol conditions to enhanced latent heating from freezing. In the presented simulations changes in latent heating above the 0 °C level are negligible, but latent heating from condensation increases with aerosol concentrations. It is hypothesised that this increase is related to changes in the cloud field structure reducing the mixing of environmental air into the convective core. The precipitation response of the deeper mixed-phase clouds along well-established convergence lines can be the opposite of predictions from parcel models. This occurs when clouds interact with a pre-existing thermodynamic environment and cloud field structural changes occur that are not captured by simple parcel model approaches.
Chemical Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Aquino, T.; Dentz, M.
2017-12-01
Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
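The generalization described above can be sketched by replacing the exponential waiting time of the classical Gillespie algorithm with an arbitrary distribution. A minimal Python illustration for a single decay reaction (the function names and the heavy-tailed distribution are illustrative choices, not the authors' code):

```python
import random

def generalized_gillespie(n0, rate_const, draw_wait, t_max, seed=1):
    """Kinetic Monte Carlo for the decay reaction A -> 0, in which the
    exponential inter-reaction time of the classical Gillespie
    algorithm is replaced by an arbitrary distribution supplied as
    draw_wait(rng, mean), in the spirit of the chemical continuous
    time random walk described in the abstract."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while n > 0:
        propensity = rate_const * n            # total reaction rate
        wait = draw_wait(rng, 1.0 / propensity)
        t += wait
        if t > t_max:
            break
        n -= 1                                 # one decay event fires
        history.append((t, n))
    return history

# classical (well-mixed) choice: exponential waiting times
exponential = lambda rng, mean: rng.expovariate(1.0 / mean)
# a heavier-tailed alternative mimicking incomplete mixing
pareto_like = lambda rng, mean: mean * 0.5 * (rng.random() ** -0.5)

hist = generalized_gillespie(100, 1.0, exponential, t_max=10.0)
```

With the exponential choice the sketch reduces to the standard Gillespie algorithm; broad waiting-time distributions such as `pareto_like` are the mechanism by which, per the abstract, time-nonlocal macroscopic kinetics emerge.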
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
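The damped fixed-point iteration underlying such SCF schemes is easy to illustrate; Anderson mixing, cited above, accelerates this kind of iteration by extrapolating over several previous residuals. A scalar Python sketch (the cosine map is a toy stand-in for the OF-DFT fixed-point problem, and all names are illustrative):

```python
import math

def mixed_fixed_point(g, x0, alpha=0.5, tol=1e-10, max_iter=500):
    """Damped (linearly mixed) fixed-point iteration:
    x_{k+1} = (1 - alpha) * x_k + alpha * g(x_k).
    Anderson mixing generalizes this simple scheme by building the
    update from a history of residuals rather than the last one."""
    x = x0
    for k in range(max_iter):
        x_new = (1 - alpha) * x + alpha * g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# toy fixed-point problem: x = cos(x)
root, iters = mixed_fixed_point(math.cos, 1.0)
```

The mixing parameter `alpha` plays the same stabilizing role as density mixing in an SCF loop: too large and the iteration can oscillate, too small and convergence is slow.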
NASA Astrophysics Data System (ADS)
Wu, Chih-Ping; Lai, Wei-Wen
2015-04-01
The nonlocal Timoshenko beam theories (TBTs), based on the Reissner mixed variational theorem (RMVT) and the principle of virtual displacement (PVD), are derived for the free vibration analysis of a single-walled carbon nanotube (SWCNT) embedded in an elastic medium and with various boundary conditions. The strong formulations of the nonlocal TBTs are derived using Hamilton's principle, in which Eringen's nonlocal constitutive relations are used to account for the small-scale effect. The interaction between the SWCNT and its surrounding elastic medium is simulated using the Winkler and Pasternak foundation models. The frequency parameters of the embedded SWCNT are obtained using the differential quadrature (DQ) method. In the cases of the SWCNT without foundations, the results of RMVT- and PVD-based nonlocal TBTs converge rapidly, and their convergent solutions closely agree with the exact ones available in the literature. Because the highest order with regard to the derivatives of the field variables used in the RMVT-based nonlocal TBT is lower than that used in its PVD-based counterpart, the former is more efficient than the latter with regard to the execution time. The former is thus both faster and more accurate than the latter for the numerical analysis of the embedded SWCNT.
Advanced Energy Storage Management in Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu
2016-01-01
With increasing penetration of distributed generation (DG) in distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative mixed-integer quadratically constrained quadratic programming model is developed to optimize the operation of a three-phase unbalanced distribution system with high penetration of photovoltaic (PV) panels, DG and energy storage (ES). The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on the modified IEEE 13-node test feeder with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are concluded.
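The golden search mentioned above is presumably a golden-section line search; a generic Python sketch of that technique for a one-dimensional step-size problem (not the paper's implementation):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal function on
    [a, b]. Each step shrinks the bracket by the factor 1/phi ~ 0.618,
    which is how such a line search controls the step size of an
    outer iterative optimization."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

xmin = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```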
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
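The key computational ingredient of such first-order schemes (FISTA-style iterations, for example) is the proximal operator of the ℓ₁/ℓ₂ mixed norm, which shrinks each source's time course as a group. A minimal Python sketch (the list-of-rows representation and names are illustrative, not the authors' implementation):

```python
import math

def prox_l21(X, alpha):
    """Proximal operator of the l1/l2 mixed norm used by MxNE-type
    solvers: each row of X (one source's time course) is shrunk
    toward zero as a group, so entire sources are switched off
    (spatial sparsity) while surviving time courses keep their shape."""
    out = []
    for row in X:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - alpha / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

X = [[3.0, 4.0],   # row norm 5 > alpha: shrunk by (1 - 1/5)
     [0.3, 0.4]]   # row norm 0.5 < alpha: zeroed entirely
Y = prox_l21(X, 1.0)
```

Applying this operator after each gradient step on the data-fit term yields the simplest (unaccelerated) variant of the iterative schemes the abstract describes.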
Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul
2015-12-01
In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is governed only by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for eigenvalue problems. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for eigenvalue problems, and the convergence analysis based on the analytic derivations.
Hansson, Kenth-Arne; Døving, Kjell B; Skjeldal, Frode M
2015-10-01
The consensus view of olfactory processing is that the axons of receptor-specific primary olfactory sensory neurons (OSNs) converge to a small subset of glomeruli, thus preserving the odour identity before the olfactory information is processed in higher brain centres. In the present study, we show that two different subsets of ciliated OSNs with different odorant specificities converge to the same glomeruli. In order to stain different ciliated OSNs in the crucian carp Carassius carassius we used two different chemical odorants, a bile salt and a purported alarm substance, together with fluorescent dextrans. The dye is transported within the axons and stains glomeruli in the olfactory bulb. Interestingly, the axons from the ciliated OSNs co-converge to the same glomeruli. Despite intermingled innervation of glomeruli, axons and terminal fields from the two different subsets of ciliated OSNs remained mono-coloured. By 4-6 days after staining, the dye was transported trans-synaptically to separately stained axons of relay neurons. These findings demonstrate that specificity of the primary neurons is retained in the olfactory pathways despite mixed innervation of the olfactory glomeruli. The results are discussed in relation to the emerging concepts about non-mammalian glomeruli. © 2015. Published by The Company of Biologists Ltd.
Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics
NASA Technical Reports Server (NTRS)
Seginer, A.; Rusak, Z.; Wasserstrom, E.
1983-01-01
There is no proof of the existence and uniqueness of solutions for nonlinear panel methods. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration, on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.
NASA Technical Reports Server (NTRS)
Ehlers, E. F.
1974-01-01
A finite difference method for the solution of the transonic flow about a harmonically oscillating wing is presented. The partial differential equation for the unsteady transonic flow was linearized by dividing the flow into separate steady and unsteady perturbation velocity potentials and by assuming small amplitudes of harmonic oscillation. The resulting linear differential equation is of mixed type, being elliptic or hyperbolic wherever the steady flow equation is elliptic or hyperbolic. Central differences were used for all derivatives except at supersonic points, where backward differencing was used for the streamwise direction. Formulas and procedures are described in sufficient detail for programming on high speed computers. To test the method, the problem of the oscillating flap on a NACA 64A006 airfoil was programmed. The numerical procedure was found to be stable and convergent even in regions of local supersonic flow with shocks.
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, after which convergence is assumed to have been achieved. We instead implement a method to detect convergence, using the stochastic oscillator for identifying convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
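The Shannon entropy of the binned source distribution, and a simple flatness check on its batch-to-batch trace, can be sketched as follows (the flatness test below is a crude stand-in for the stochastic-oscillator criterion, not the implemented method, and the trace data are invented):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a source-particle distribution
    binned on a spatial mesh; this is the quantity tracked across
    batches to judge source convergence."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def looks_converged(entropies, window=5, tol=0.01):
    """Declare convergence once the entropy trace has flattened over
    the last few batches (a simplistic stand-in for the paper's
    stochastic-oscillator detector)."""
    recent = entropies[-window:]
    return len(entropies) >= window and max(recent) - min(recent) < tol

# toy entropy trace that rises and then plateaus
trace = [3.1, 3.5, 3.8, 3.95, 3.99, 4.0, 3.995, 4.0, 4.001, 3.999]
```

In the in situ setting described above, tallying would begin at the first batch for which the detector fires, rather than after a fixed inactive-batch count.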
Combustor cap having non-round outlets for mixing tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Michael John; Boardman, Gregory Allen; McConnaughhay, Johnie Franklin
2016-12-27
A system includes a combustor cap configured to be coupled to a plurality of mixing tubes of a multi-tube fuel nozzle, wherein each mixing tube of the plurality of mixing tubes is configured to mix air and fuel to form an air-fuel mixture. The combustor cap includes multiple nozzles integrated within the combustor cap. Each nozzle of the multiple nozzles is coupled to a respective mixing tube of the multiple mixing tubes. In addition, each nozzle of the multiple nozzles includes a first end and a second end. The first end is coupled to the respective mixing tube of the multiple mixing tubes. The second end defines a non-round outlet for the air-fuel mixture. Each nozzle of the multiple nozzles includes an inner surface having first and second portions; the first portion radially diverges along an axial direction from the first end to the second end, and the second portion radially converges along the axial direction from the first end to the second end.
Earl, David J; Deem, Michael W
2005-04-14
Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.
The convergence of Chinese county government health expenditures: capitation and contribution.
Zhang, Guoying; Zhang, Luwen; Wu, Shaolong; Xia, Xiaoqiong; Lu, Liming
2016-08-19
The disparity between government health expenditures across regions is more severe in developing countries than it is in developed countries. The capitation subsidy method has been proven effective in developed countries in reducing this disparity, but it has not been tested in China, the world's largest developing country. The convergence method of neoclassical economics was adopted to test the convergence of China's regional government health expenditure. Data were obtained from Provinces, Prefectures and Counties Fiscal Statistical Yearbook (2003-2007) edited by the Chinese Ministry of Finance, and published by the Chinese Finance & Economics Publishing House. The existence of σ-convergence and long-term and short-term β-convergence indicated the effectiveness of the capitation subsidy method in the New Rural Cooperative Medical Scheme on narrowing county government health expenditure disparities. The supply-side variables contributed the most to the county government health expenditure convergence, and factors contributing to convergence of county government health expenditures per capita were different in three regions. The narrowing disparity between county government health expenditures across regions supports the effectiveness of the capitation subsidy method adopted by China's New Rural Cooperative Scheme. However, subsidy policy still requires further improvement.
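The σ- and β-convergence measures used in such analyses can be computed in a few lines. A Python sketch on a toy expenditure panel (data, names and units invented for illustration):

```python
import math

def sigma_convergence(panel):
    """Cross-sectional dispersion (std. dev. of log expenditure) for
    each year; a falling series indicates sigma-convergence."""
    out = []
    for year in panel:
        logs = [math.log(v) for v in year]
        m = sum(logs) / len(logs)
        out.append(math.sqrt(sum((x - m) ** 2 for x in logs) / len(logs)))
    return out

def beta_convergence(panel):
    """Slope of long-run growth on the initial level (log scale); a
    negative slope means poorer regions grow faster, i.e. absolute
    beta-convergence."""
    x = [math.log(v) for v in panel[0]]
    y = [math.log(b / a) for a, b in zip(panel[0], panel[-1])]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# toy panel: rows are years, columns are counties; laggards catch up
panel = [[100.0, 200.0, 400.0, 800.0],
         [150.0, 260.0, 460.0, 840.0],
         [220.0, 340.0, 540.0, 900.0]]
```

The study's short- and long-term β-convergence tests add control variables to this regression; the sketch shows only the unconditional version.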
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
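Observed rates such as the 2p+1 super-convergence quoted above are typically measured from errors on successively refined discretizations; a generic Python sketch of that measurement (the error values are invented for illustration):

```python
import math

def observed_order(errors, refinement=2.0):
    """Observed order of accuracy from errors on successively refined
    grids (refinement ratio r): each adjacent pair of errors gives
    p_obs = log(e_coarse / e_fine) / log(r)."""
    return [math.log(e1 / e2) / math.log(refinement)
            for e1, e2 in zip(errors, errors[1:])]

# errors shrinking by ~8x per halving of h imply third-order convergence
orders = observed_order([1.0e-2, 1.25e-3, 1.5625e-4])
```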
Experimental study on a heavy-gas cylinder accelerated by cylindrical converging shock waves
NASA Astrophysics Data System (ADS)
Si, T.; Zhai, Z.; Luo, X.; Yang, J.
2014-01-01
The Richtmyer-Meshkov instability behavior of a heavy-gas cylinder accelerated by a cylindrical converging shock wave is studied experimentally. A curved wall profile is well-designed based on the shock dynamics theory [Phys. Fluids, 22: 041701 (2010)] with an incident planar shock Mach number of 1.2 and a converging angle of in a mm square cross-section shock tube. The cylinder, mixed with glycol droplets, flows vertically through the test section and is illuminated horizontally by a laser sheet. Images, obtained only one per run by an ICCD (intensified charge-coupled device) camera combined with a pulsed Nd:YAG laser, are presented first, and the complete evolution process of the cylinder is then captured in a single test shot by a high-speed video camera combined with a high-power continuous laser. In this way, both the development of the first counter-rotating vortex pair and that of the second counter-rotating vortex pair, with a rotating direction opposite to the first one, are observed. The experimental results indicate that the phenomena induced by the converging shock wave and by the reflected shock formed from the center of convergence are distinct from those found in the planar shock case.
Zhong, Suyu; He, Yong; Gong, Gaolang
2015-05-01
Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences of WM networks and, particularly, if distinct methods can provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) a spatial pattern of WM connections, (2) a spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns, but diverged with other methods. Furthermore, the convergence/divergence between methods differed among network properties that were adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial pattern of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern in individual differences. Finally, the test-retest analysis revealed a decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrated a measure-dependent effect of network construction methods on the individual difference of WM network properties. 
© 2015 Wiley Periodicals, Inc.
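Global efficiency, one of the network measures whose individual differences are compared above, can be sketched for a small unweighted network (the adjacency-matrix representation and names are illustrative choices):

```python
def global_efficiency(adj):
    """Global efficiency of an unweighted, undirected graph: the mean
    inverse shortest-path length over all node pairs, one of the
    whole-brain WM network measures used in such studies."""
    n = len(adj)
    INF = float("inf")
    # Floyd-Warshall all-pairs shortest paths
    d = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return sum(1.0 / d[i][j] for i, j in pairs if d[i][j] < INF) / len(pairs)

# path graph 0-1-2: distances are 1, 1 and 2
path3 = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
eff = global_efficiency(path3)
```

Edge-weighted variants (FA-weighted, fiber-density-weighted, and so on, as in the study) replace the 0/1 entries with connection weights and use weighted path lengths.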
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
NASA Technical Reports Server (NTRS)
Koenig, R. W.; Fishbach, L. H.
1972-01-01
A computer program entitled GENENG employs component performance maps to perform analytical, steady state, engine cycle calculations. Through a scaling procedure, each of the component maps can be used to represent a family of maps (different design values of pressure ratios, efficiency, weight flow, etc.) Either convergent or convergent-divergent nozzles may be used. Included is a complete FORTRAN 4 listing of the program. Sample results and input explanations are shown for one-spool and two-spool turbojets and two-spool separate- and mixed-flow turbofans operating at design and off-design conditions.
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
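Simultaneous (subspace) iteration, to which the report relates the implicitly restarted Arnoldi method, can be sketched in pure Python for a small symmetric matrix (names and the test matrix are illustrative; IRAM itself adds the implicit projection step the report analyzes):

```python
def subspace_iteration(A, k=2, iters=200):
    """Simultaneous (orthogonal/subspace) iteration: repeatedly apply
    A to a block of k vectors and re-orthonormalize; the block
    converges to the dominant invariant subspace of A."""
    n = len(A)
    V = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(n)]
    for _ in range(iters):
        W = [[sum(A[i][m] * V[m][j] for m in range(n)) for j in range(k)]
             for i in range(n)]
        # modified Gram-Schmidt re-orthonormalization of the block
        for j in range(k):
            for p in range(j):
                dot = sum(W[i][p] * W[i][j] for i in range(n))
                for i in range(n):
                    W[i][j] -= dot * W[i][p]
            nrm = sum(W[i][j] ** 2 for i in range(n)) ** 0.5
            for i in range(n):
                W[i][j] /= nrm
        V = W
    # Rayleigh quotients approximate the dominant eigenvalues
    AV = [[sum(A[i][m] * V[m][j] for m in range(n)) for j in range(k)]
          for i in range(n)]
    return [sum(V[i][j] * AV[i][j] for i in range(n)) for j in range(k)]

# symmetric tridiagonal test matrix with eigenvalues 2+sqrt(2), 2, 2-sqrt(2)
evals = subspace_iteration([[2.0, 1.0, 0.0],
                            [1.0, 2.0, 1.0],
                            [0.0, 1.0, 2.0]])
```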
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a signal processing technique based on higher-order statistics and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
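The wavelet-threshold step can be illustrated with a one-level Haar transform and soft thresholding (a simplified stand-in for the paper's pipeline, which uses deeper decompositions and is combined with ICA; all names and data are illustrative):

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold de-noising: transform,
    soft-threshold the detail (high-frequency) coefficients, and
    invert. The signal length must be even."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    # soft thresholding shrinks coefficients toward zero
    soft = lambda x: math.copysign(max(abs(x) - threshold, 0.0), x)
    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

clean = haar_denoise([1.0, 1.2, 1.1, 0.9, 5.0, 5.2, 5.1, 4.9], 0.5)
```

With threshold zero the transform pair reconstructs the signal exactly; a positive threshold suppresses small high-frequency fluctuations while keeping the step between the two signal levels.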
NASA Astrophysics Data System (ADS)
Hagmann, C.; Shaughnessy, D. A.; Moody, K. J.; Grant, P. M.; Gharibyan, N.; Gostic, J. M.; Wooddy, P. T.; Torretto, P. C.; Bandong, B. B.; Bionta, R.; Cerjan, C. J.; Bernstein, L. A.; Caggiano, J. A.; Herrmann, H. W.; Knauer, J. P.; Sayre, D. B.; Schneider, D. H.; Henry, E. A.; Fortner, R. J.
2015-07-01
A new radiochemical method for determining deuterium-tritium (DT) fuel and plastic ablator (CH) areal densities (ρR) in high-convergence, cryogenic inertial confinement fusion implosions at the National Ignition Facility is described. It is based on measuring the 198Au/196Au activation ratio using the collected post-shot debris of the Au hohlraum. The Au ratio combined with the independently measured neutron down scatter ratio uniquely determines the areal densities ρR(DT) and ρR(CH) during burn in the context of a simple 1-dimensional capsule model. The results show larger than expected ρR(CH) values, hinting at the presence of cold fuel-ablator mix.
Liang, Di; Zhang, Donglan; Huang, Jiayan; Schweitzer, Stuart
2016-01-01
China's rapid and sustained economic growth offers an opportunity to ask whether the advantages of growth diffuse throughout an economy, or remain localized in areas where the growth has been the greatest. A critical policy area in China has been the health system, and health inequality has become an issue that has led the government to broaden national health insurance programs. This study investigates whether health system resources and performance have converged over the past 30 years across China's 31 provinces. To examine geographic variation of health system resources and performance at the provincial level, we measure the degree of sigma convergence and beta convergence in indicators of health system resources (structure), health services utilization (process), and outcome. All data are from officially published sources: the China Health Statistics Year Book and the China Statistics Year Book. Sigma convergence is found for resource indicators, whereas it is not observed for either process or outcome indicators, indicating that disparities only narrowed in health system resources. Beta convergence is found in most indicators, except for 2 procedure indicators, reflecting that provinces with poorer resources were catching up. Convergence found in this study probably reflects the mixed outcome of government input, and market forces. Thus, left alone, the equitable distribution of health care resources may not occur naturally during a period of economic growth. Governmental and societal efforts are needed to reduce geographic health variation and promote health equity. © The Author(s) 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboredo, Fernando A.
The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B {\\bf 79}, 195117 (2009); Reboredo, {\\it ibid.} {\\bf 80}, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density, which is given by an ensemble of electronic configurations (walkers) with complex weights. These complex weights allow the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. {\\bf 71}, 2777 (1993)]. When used recursively, it improves the nodes and the phase simultaneously. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low-energy excitations with magnetic fields or periodic boundary conditions. The potential applications of this new method to the study of periodic, magnetic, and complex Hamiltonians are discussed.
NASA Astrophysics Data System (ADS)
Chakon, Ofir; Or, Yizhar
2017-08-01
Underactuated robotic locomotion systems are commonly represented by nonholonomic constraints where in mixed systems, these constraints are also combined with momentum evolution equations. Such systems have been analyzed in the literature by exploiting symmetries and utilizing advanced geometric methods. These works typically assume that the shape variables are directly controlled, and obtain the system's solutions only via numerical integration. In this work, we demonstrate utilization of the perturbation expansion method for analyzing a model example of mixed locomotion system—the twistcar toy vehicle, which is a variant of the well-studied roller-racer model. The system is investigated by assuming small-amplitude oscillatory inputs of either steering angle (kinematic) or steering torque (mechanical), and explicit expansions for the system's solutions under both types of actuation are obtained. These expressions enable analyzing the dependence of the system's dynamic behavior on the vehicle's structural parameters and actuation type. In particular, we study the reversal in direction of motion under steering angle oscillations about the unfolded configuration, as well as influence of the choice of actuation type on convergence properties of the motion. Some of the findings are demonstrated qualitatively by reporting preliminary motion experiments with a modular robotic prototype of the vehicle.
Gupta, Diksha; Singh, Bani
2014-01-01
The objective of this investigation is to analyze the effect of unsteadiness on the mixed convection boundary layer flow of micropolar fluid over a permeable shrinking sheet in the presence of viscous dissipation. At the sheet a variable distribution of suction is assumed. The unsteadiness in the flow and temperature fields is caused by the time dependence of the shrinking velocity and surface temperature. With the aid of similarity transformations, the governing partial differential equations are transformed into a set of nonlinear ordinary differential equations, which are solved numerically, using variational finite element method. The influence of important physical parameters, namely, suction parameter, unsteadiness parameter, buoyancy parameter and Eckert number on the velocity, microrotation, and temperature functions is investigated and analyzed with the help of their graphical representations. Additionally skin friction and the rate of heat transfer have also been computed. Under special conditions, an exact solution for the flow velocity is compared with the numerical results obtained by finite element method. An excellent agreement is observed for the two sets of solutions. Furthermore, to verify the convergence of numerical results, calculations are conducted with increasing number of elements. PMID:24672310
Golembiewski, Elizabeth; Watson, Dennis P.; Robison, Lisa; Coberg, John W.
2017-01-01
The positive relationship between social support and mental health has been well documented, but individuals experiencing chronic homelessness face serious disruptions to their social networks. Housing First (HF) programming has been shown to improve health and stability of formerly chronically homeless individuals. However, researchers are only just starting to understand the impact HF has on residents’ individual social integration. The purpose of the current study was to describe and understand changes in social networks of residents living in a HF program. Researchers employed a longitudinal, convergent parallel mixed method design, collecting quantitative social network data through structured interviews (n = 13) and qualitative data through semi-structured interviews (n = 20). Quantitative results demonstrated a reduction in network size over the course of one year. However, both network density and frequency of contact with network members increased. Qualitative interviews demonstrated a strengthening in the quality of relationships with family and housing providers and a shedding of burdensome and abusive relationships. These results suggest network decay is a possible indicator of participants’ recovery process as they discontinued negative relationships and strengthened positive ones. PMID:28890807
Experiences of parents of children with special needs at school entry: a mixed method approach.
Siddiqua, A; Janus, M
2017-07-01
The transition from pre-school to kindergarten can be complex for children who need special assistance due to mental or physical disabilities (children with 'special needs'). We used a convergent mixed method approach to explore parents' experiences with service provision as their children transitioned to school. Parents (including one grandparent) of 37 children aged 4 to 6 years completed measures assessing their perceptions of and satisfaction with services. Semi-structured interviews were also conducted with 10 parents to understand their experience with services. Post-transition, parents reported poorer perceptions of services and lower satisfaction than they had pre-transition. The following themes emerged from the qualitative data: qualities of services and service providers, communication and information transfer, parent advocacy, uncertainty about services, and contrasts and contradictions in satisfaction. The qualitative findings indicate that parents were both satisfied and concerned with aspects of the post-transition service provision. While the quantitative results suggested that parents' experience with services became less positive after their children entered school, the qualitative findings illustrated the variability in parents' experiences and the components of service provision that require improvements to facilitate a successful school entry. © 2017 John Wiley & Sons Ltd.
Developing and investigating the use of single-item measures in organizational research.
Fisher, Gwenith G; Matthews, Russell A; Gibbons, Alyssa Mitchell
2016-01-01
The validity of organizational research relies on strong research methods, which include effective measurement of psychological constructs. The general consensus is that multiple-item measures have better psychometric properties than single-item measures. However, due to practical constraints (e.g., survey length, respondent burden) there are situations in which certain single items may be useful for capturing information about constructs that might otherwise go unmeasured. We evaluated 37 items, including 18 newly developed items as well as 19 single items selected from existing multiple-item scales based on psychometric characteristics, to assess 18 constructs frequently measured in organizational and occupational health psychology research. We examined evidence of reliability; convergent, discriminant, and content validity; and test-retest reliabilities at 1- and 3-month time lags for single-item measures using a multistage and multisource validation strategy across 3 studies, including data from N = 17 occupational health subject matter experts and N = 1,634 survey respondents across 2 samples. Items selected from existing scales generally demonstrated better internal consistency reliability and convergent validity, whereas the new items generally had higher levels of content validity. We offer recommendations regarding when use of single items may be more or less appropriate, as well as 11 items that seem acceptable, 14 items that might be used with caution due to mixed results, and 12 items we do not recommend using as single-item measures. Although multiple-item measures are preferable from a psychometric standpoint, in some circumstances single-item measures can provide useful information. (c) 2016 APA, all rights reserved.
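The quantitative checks named above (test-retest reliability, convergent validity) both reduce to correlations between measurements. A rough sketch on simulated data, where the data-generating model, sample size, and cutoffs are all invented for illustration and are not the study's:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation, the basic statistic behind both the
    test-retest reliability and convergent-validity checks."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

# Hypothetical data: a latent construct measured by a 4-item scale
# and by a noisier single item, administered twice.
rng = np.random.default_rng(2)
latent = rng.normal(size=200)                               # true scores
scale_items = latent[:, None] + 0.5 * rng.normal(size=(200, 4))
single_t1 = latent + 0.7 * rng.normal(size=200)             # time 1
single_t2 = latent + 0.7 * rng.normal(size=200)             # time 2 (retest)

convergent = pearson_r(single_t1, scale_items.mean(axis=1))  # vs. full scale
retest = pearson_r(single_t1, single_t2)                     # stability
assert convergent > 0.6 and retest > 0.5
```

With real survey data, `latent` is unobserved; the study instead compares each single item against its parent multiple-item scale and against expert content-validity ratings.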
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
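The "more traditional complete basis set extrapolation" mentioned above is commonly realized with a two-point inverse-power model. A sketch, assuming the widely used E(X) = E_CBS + A·X^(-3) form for correlation energies (the article's exact extrapolation model may differ):

```python
def cbs_extrapolate(e_x, e_y, x, y, power=3):
    """Two-point complete-basis-set (CBS) extrapolation.

    Assumes the correlation energy follows E(X) = E_CBS + A * X**(-power)
    for basis-set cardinal numbers X (e.g. X=3 triple-zeta, X=4
    quadruple-zeta).  Eliminating A from the two equations gives the
    closed form below.
    """
    return (e_x * x**power - e_y * y**power) / (x**power - y**power)

# Hypothetical energies generated from E_CBS = -0.5, A = 1 (arbitrary units):
e3 = -0.5 + 3.0**-3          # "triple-zeta" correlation energy
e4 = -0.5 + 4.0**-3          # "quadruple-zeta" correlation energy
e_cbs = cbs_extrapolate(e3, e4, 3, 4)
assert abs(e_cbs - (-0.5)) < 1e-12   # recovers the assumed CBS limit
```

The article's second scheme instead converges energy *differences*: by size-consistency, one can apply the same finite basis to, say, a dimer and its separated fragments and subtract, so that much of the basis-set error cancels before any extrapolation.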
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
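The key claim, that mixing at most two deterministic actions can beat every deterministic policy when the chance-constrained problem is nonconvex, can be seen in a toy instance (all numbers invented for illustration):

```python
# Toy instance: action A is safe but costly, action B is cheap but risky.
# Any deterministic policy must pick A (B alone violates the 10% risk
# bound), whereas randomizing between the two keeps the bound satisfied
# in expectation and lowers the expected cost.
cost = {"A": 10.0, "B": 2.0}
risk = {"A": 0.0, "B": 0.2}    # probability of constraint violation
risk_bound = 0.1

def mixed_policy(p_b):
    """Expected cost and violation probability when playing B w.p. p_b."""
    exp_cost = p_b * cost["B"] + (1 - p_b) * cost["A"]
    exp_risk = p_b * risk["B"] + (1 - p_b) * risk["A"]
    return exp_cost, exp_risk

det_cost = cost["A"]                  # best feasible deterministic cost: 10.0
p_b = risk_bound / risk["B"]          # make the risk bound tight: p_b = 0.5
mix_cost, mix_risk = mixed_policy(p_b)
assert mix_risk <= risk_bound and mix_cost < det_cost   # mixing wins
```

The two-action structure is not accidental: it mirrors the paper's result that an optimal mixed strategy never needs to randomize over more than two deterministic control actions.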
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
FV-MHMM: A Discussion on Weighting Schemes.
NASA Astrophysics Data System (ADS)
Franc, J.; Gerald, D.; Jeannin, L.; Egermann, P.; Masson, R.
2016-12-01
Upscaling or homogenization techniques consist in finding block-equivalent or equivalent upscaled properties on a coarse grid from heterogeneous properties defined on an underlying fine grid. However, this can become costly and resource consuming. Harder et al., 2013, have developed a Multiscale Hybrid-Mixed Method (MHMM) of upscaling to treat Darcy-type equations on heterogeneous fields formulated using a finite element method. Recently, Franc et al., 2016, have extended this method of upscaling to a finite volume formulation (FV-MHMM). Although convergence when refining the Lagrange multipliers space has been observed, numerical artefacts can occur while numerically trapping the flow in regions of low permeability. This work will present the development of the method along with the results obtained from its classical formulation. Then, two weighting schemes and their benefits for the FV-MHMM method will be presented in some simple random permeability cases. The next example will involve a larger heterogeneous 2D permeability field extracted from the 10th SPE test case. Eventually, multiphase flow will be addressed as an extension of this single-phase flow method. An elliptic pressure equation solved on the coarse grid via FV-MHMM will be sequentially coupled with a hyperbolic saturation equation on the fine grid. The improved accuracy thanks to the weighting scheme will be measured against a finite volume fine grid solution. References: Harder, C., Paredes, D. and Valentin, F., A family of multiscale hybrid-mixed finite element methods for the Darcy equation with rough coefficients, Journal of Computational Physics, 2013. Franc, J., Debenest, G., Jeannin, L., Egermann, P. and Masson, R., FV-MHMM for reservoir modelling, ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery, 2015.
ERIC Educational Resources Information Center
Busse, R. T.; Elliott, Stephen N.; Kratochwill, Thomas R.
2010-01-01
The purpose of this article is to present Convergent Evidence Scaling (CES) as an emergent method for combining data from multiple assessment indicators. The CES method combines single-case assessment data by converging data gathered across multiple persons, settings, or measures, thereby providing an overall criterion-referenced outcome on which…
A PROOF OF CONVERGENCE OF THE HORN AND SCHUNCK OPTICAL FLOW ALGORITHM IN ARBITRARY DIMENSION
LE TARNEC, LOUIS; DESTREMPES, FRANÇOIS; CLOUTIER, GUY; GARCIA, DAMIEN
2013-01-01
The Horn and Schunck (HS) method, which amounts to the Jacobi iterative scheme in the interior of the image, was one of the first optical flow algorithms. In this article, we prove the convergence of the HS method, whenever the problem is well-posed. Our result is shown in the framework of a generalization of the HS method in dimension n ≥ 1, with a broad definition of the discrete Laplacian. In this context, the condition for the convergence is that the intensity gradients are not all contained in a same hyperplane. Two other articles ([17] and [13]) claimed to solve this problem in the case n = 2, but it appears that both of these proofs are erroneous. Moreover, we explain why some standard results on the convergence of the Jacobi method do not apply for the HS problem, unless n = 1. It is also shown that the convergence of the HS scheme implies the convergence of the Gauss-Seidel and SOR schemes for the HS problem. PMID:26097625
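The Jacobi-type fixed-point scheme that the proof concerns can be sketched as follows. This is a minimal 2D version with periodic boundaries and an invented test image, not the authors' generalized n-dimensional scheme with its broad Laplacian definition:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=500):
    """Minimal Horn-Schunck optical flow via the Jacobi-type iteration.

    Each sweep replaces (u, v) by their neighborhood averages corrected
    with the brightness-constancy residual -- the fixed-point scheme whose
    convergence the article analyzes.
    """
    Iy, Ix = np.gradient(I1)          # spatial gradients (per-pixel units)
    It = I2 - I1                       # temporal gradient

    def avg(f):
        # 4-neighbor average (periodic boundaries for simplicity)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        resid = (Ix * ub + Iy * vb + It) / denom
        u = ub - Ix * resid
        v = vb - Iy * resid
    return u, v

# Invented test pattern: a horizontal sinusoid shifted right by one pixel.
x = np.linspace(0.0, 4.0 * np.pi, 64)
I1 = np.tile(np.sin(x), (64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2, alpha=0.5)
assert 0.5 < u.mean() < 1.5     # recovered horizontal flow is roughly 1 px
assert np.allclose(v, 0.0)      # no vertical motion is induced
```

Note that in this example all intensity gradients lie in the horizontal direction, so by the article's well-posedness condition the vertical component is undetermined; the iteration simply leaves `v` at its zero initialization while `u` converges.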
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
NASA Astrophysics Data System (ADS)
Handayani, D.; Nuraini, N.; Tse, O.; Saragih, R.; Naiborhu, J.
2016-04-01
PSO is a computational optimization method motivated by the social behavior of organisms such as bird flocking, fish schooling, and human social relations, and it is one of the most important swarm intelligence algorithms. In this study, we analyze the convergence of PSO when it is applied to the simulation of the within-host dengue infection treatment model from our earlier research. We used the PSO method to construct the initial adjoint equation and to solve a control problem. Because the control input affects the continuity of the objective function, and because the method must adapt to a dynamic environment, the convergence of PSO needs to be analyzed. The convergence analysis yields parameter conditions that ensure convergent numerical simulations of this model using PSO.
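A bare-bones global-best PSO of the kind such convergence analyses address is sketched below. The inertia and acceleration coefficients are a standard constriction-style choice that satisfies the usual PSO convergence conditions, and the sphere objective is only a stand-in for the dengue-model objective:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=300, w=0.729, c1=1.494, c2=1.494,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

x_best, f_best = pso(lambda z: np.sum(z**2), dim=3)
assert f_best < 1e-4   # swarm converges to the sphere minimum at the origin
```

The convergence conditions in the literature constrain exactly these parameters: roughly, |w| < 1 together with a bound on c1 + c2 relative to w keeps each particle's trajectory from diverging.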
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.
The convergence of spectral methods for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1987-01-01
The convergence of the Fourier method for scalar nonlinear conservation laws which exhibit spontaneous shock discontinuities is discussed. Numerical tests indicate that the convergence may (and in fact in some cases must) fail, with or without post-processing of the numerical solution. Instead, a new kind of spectrally accurate vanishing viscosity is introduced to augment the Fourier approximation of such nonlinear conservation laws. Using compensated compactness arguments, it is shown that this spectral viscosity prevents oscillations, and convergence to the unique entropy solution follows.
A superlinear convergence estimate for an iterative method for the biharmonic equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, M.A.
In [CDH] a method for the solution of boundary value problems for the biharmonic equation using conformal mapping was investigated. The method is an implementation of the classical method of Muskhelishvili. In [CDH] it was shown, using the Hankel structure, that the linear system in [Musk] is the discretization of the identity plus a compact operator, and therefore the conjugate gradient method will converge superlinearly. The purpose of this paper is to give an estimate of the superlinear convergence in the case when the boundary curve is in a Hölder class.
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions, when using the proposed methods.
Algorithms for accelerated convergence of adaptive PCA.
Chatterjee, C; Kang, Z; Roychowdhury, V P
2000-01-01
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
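The slow gradient-descent baseline these authors accelerate is typified by Oja's adaptive rule for the first principal component. A minimal sketch with an illustrative step size and invented Gaussian data:

```python
import numpy as np

def oja_pca(samples, eta=0.01, seed=1):
    """First principal component via Oja's adaptive (online) rule.

    Update: w <- w + eta * y * (x - y * w), with y = w . x.
    This is the plain gradient-descent-style scheme; the article's
    contribution is faster-converging alternatives to it.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=samples.shape[1])
    w /= np.linalg.norm(w)
    for x in samples:
        y = w @ x
        w += eta * y * (x - y * w)   # Hebbian term with self-normalization
    return w / np.linalg.norm(w)

# Invented data: 2D Gaussian with dominant variance along (1, 1)/sqrt(2)
rng = np.random.default_rng(0)
C = np.array([[3.0, 2.0], [2.0, 3.0]])   # top eigenvector is (1, 1)/sqrt(2)
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)
w = oja_pca(X)
target = np.array([1.0, 1.0]) / np.sqrt(2)
assert abs(w @ target) > 0.99   # aligned with the top eigenvector, up to sign
```

The fixed gain `eta` illustrates the gain-sequence issue the abstract mentions: too large and the iterate oscillates around the eigenvector, too small and convergence stalls, which is what motivates the automatic, faster schemes the article derives.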
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
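The combination described, an inexact Newton step taken from a Krylov subspace and globalized by a linesearch, can be sketched with a matrix-free finite-difference Jacobian and GMRES. This uses SciPy's GMRES for the inner solve and a simple Armijo-style backtracking rather than the trust-region variant; the test problem is invented:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_ls(F, x0, max_newton=50, eps=1e-8, tol=1e-8):
    """Inexact Newton: each direction solves J d = -F(x) approximately
    with GMRES (a Krylov method), then a backtracking linesearch on ||F||
    provides the global strategy."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        Fx = F(x)
        norm_Fx = np.linalg.norm(Fx)
        if norm_Fx < tol:
            break
        # Matrix-free Jacobian-vector product: J v ~ (F(x + eps v) - F(x)) / eps
        J = LinearOperator((x.size, x.size),
                           matvec=lambda v: (F(x + eps * v) - Fx) / eps)
        d, _ = gmres(J, -Fx, maxiter=20)   # approximate Newton direction
        # Armijo-style backtracking: demand sufficient decrease in ||F||
        t = 1.0
        while np.linalg.norm(F(x + t * d)) > (1 - 1e-4 * t) * norm_Fx:
            t *= 0.5
            if t < 1e-10:
                break
        x = x + t * d
    return x

# Invented nonlinear system with root (1, 1):
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

root = newton_krylov_ls(F, np.array([2.0, 0.5]))
assert np.allclose(root, [1.0, 1.0], atol=1e-6)
```

Capping GMRES at a few iterations is what makes the Newton direction come "from a subspace of small dimensions", as the abstract puts it; the linesearch then guards against the poor global behavior such inexact directions can otherwise cause.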
Dixon, Brian E; Grannis, Shaun J; Revere, Debra
2013-10-30
Health information exchange (HIE) is the electronic sharing of data and information between clinical care and public health entities. Previous research has shown that using HIE to electronically report laboratory results to public health can improve surveillance practice, yet there has been little utilization of HIE for improving provider-based disease reporting. This article describes a study protocol that uses mixed methods to evaluate an intervention to electronically pre-populate provider-based notifiable disease case reporting forms with clinical, laboratory and patient data available through an operational HIE. The evaluation seeks to: (1) identify barriers and facilitators to implementation, adoption and utilization of the intervention; (2) measure impacts on workflow, provider awareness, and end-user satisfaction; and (3) describe the contextual factors that impact the effectiveness of the intervention within heterogeneous clinical settings and the HIE. The intervention will be implemented over a staggered schedule in one of the largest and oldest HIE infrastructures in the U.S., the Indiana Network for Patient Care. Evaluation will be conducted utilizing a concurrent design mixed methods framework in which qualitative methods are embedded within the quantitative methods. Quantitative data will include reporting rates, timeliness and burden and report completeness and accuracy, analyzed using interrupted time-series and other pre-post comparisons. Qualitative data regarding pre-post provider perceptions of report completeness, accuracy, and timeliness, reporting burden, data quality, benefits, utility, adoption, utilization and impact on reporting workflow will be collected using semi-structured interviews and open-ended survey items. Data will be triangulated to find convergence or agreement by cross-validating results to produce a contextualized portrayal of the facilitators and barriers to implementation and use of the intervention. 
By applying mixed research methods and measuring context, facilitators and barriers, and individual, organizational and data quality factors that may impact adoption and utilization of the intervention, we will document whether and how the intervention streamlines provider-based manual reporting workflows, lowers barriers to reporting, increases data completeness, improves reporting timeliness and captures a greater portion of communicable disease burden in the community.
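The quantitative pre-post comparison described above, an interrupted time-series, is commonly fit as a segmented regression. The sketch below uses synthetic monthly reporting rates and illustrative variable names, not the study's actual data.

```python
import numpy as np

def segmented_regression(y, intervention_idx):
    """Fit y(t) = b0 + b1*t + b2*step(t) + b3*(t - t0)*step(t) by least squares.
    b2 estimates the level change and b3 the slope change at the intervention."""
    t = np.arange(len(y), dtype=float)
    step = (t >= intervention_idx).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - intervention_idx) * step])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [baseline level, baseline slope, level change, slope change]

# Synthetic monthly reporting rates: a level jump of +10 after month 24.
rng = np.random.default_rng(0)
t = np.arange(48)
y = 50 + 0.2 * t + 10 * (t >= 24) + rng.normal(0, 1, 48)
b0, b1, b2, b3 = segmented_regression(y, 24)
```

The fitted level-change coefficient recovers the simulated post-intervention jump; in practice autocorrelation-robust standard errors would also be needed.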
Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf
2017-01-01
Introduction: The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs, for risk assessment regarding whole-body forces, awkward body postures and body movement, have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis: As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination: The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki, as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018.
Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. PMID:28827239
Rossler, Kelly L; Kimble, Laura P
2016-01-01
Didactic lecture does not lend itself to teaching interprofessional collaboration. High-fidelity human patient simulation with a focus on clinical situations/scenarios is highly conducive to interprofessional education. Consequently, there is a need for research supporting the incorporation of interprofessional education into high-fidelity patient simulation-based technology. The purpose of this study was to explore readiness for interprofessional learning and collaboration among pre-licensure health professions students participating in an interprofessional education human patient simulation experience. Using a mixed methods convergent parallel design, a sample of 53 pre-licensure health professions students enrolled in nursing, respiratory therapy, health administration, and physical therapy programs within a college of health professions participated in high-fidelity human patient simulation experiences. Perceptions of interprofessional learning and collaboration were measured with the revised Readiness for Interprofessional Learning Scale (RIPLS) and the Health Professional Collaboration Scale (HPCS). Focus groups were conducted during the simulation post-briefing to obtain qualitative data. Statistical analysis included non-parametric inferential statistics. Qualitative data were analyzed using a phenomenological approach. Pre- and post-RIPLS demonstrated that pre-licensure health professions students reported significantly more positive attitudes about readiness for interprofessional learning post-simulation in the areas of teamwork and collaboration, negative professional identity, and positive professional identity. Post-simulation HPCS revealed that pre-licensure nursing and health administration groups reported greater health collaboration during simulation than physical therapy students. Qualitative analysis yielded three themes: "exposure to experiential learning," "acquisition of interactional relationships," and "presence of chronology in role preparation."
Quantitative and qualitative data converged around the finding that physical therapy students had less positive perceptions of the experience because they viewed physical therapy practice as occurring one-on-one rather than in groups. Findings support that pre-licensure students are ready to engage in interprofessional education through exposure to an experiential format such as high-fidelity human patient simulation. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael
2017-07-01
Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-faceted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.
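The tension between long-time instability and strong transient convergence can be illustrated, in a far simpler setting than the authors' turbulent-flow model, with finite-time Lyapunov exponents of the logistic map: short-horizon exponents fluctuate widely and some are negative (contracting), even though the long-time exponent is positive. The map and parameter below are assumptions chosen only for illustration.

```python
import numpy as np

def finite_time_lyapunov(x0, n, r=3.8):
    """Average log-stretching rate log|f'(x)| over n iterates of the
    logistic map x -> r*x*(1 - x); this is the finite-time Lyapunov exponent."""
    x, s = x0, 0.0
    for _ in range(n):
        s += np.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))  # guard against log(0)
        x = r * x * (1.0 - x)
    return s / n

rng = np.random.default_rng(1)
# Short-horizon exponents fluctuate; some trajectories are transiently contracting.
short = np.array([finite_time_lyapunov(x0, 5) for x0 in rng.uniform(0.01, 0.99, 2000)])
# Long-horizon exponents concentrate on the positive long-time value.
long_ = np.array([finite_time_lyapunov(x0, 2000) for x0 in rng.uniform(0.01, 0.99, 50)])
```

The spread of the short-horizon distribution relative to the long-horizon one is the kind of large-deviation behavior the abstract invokes.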
Measurement of LNAPL flow using single-well tracer dilution techniques.
Sale, Tom; Taylor, Geoffrey Ryan; Iltis, Gabriel; Lyverse, Mark
2007-01-01
This paper describes the use of single-well tracer dilution techniques to resolve the rate of light nonaqueous phase liquid (LNAPL) flow through wells and the adjacent geologic formation. Laboratory studies are presented in which a fluorescing tracer is added to LNAPL in wells. An in-well mixer keeps the tracer well mixed in the LNAPL. Tracer concentrations in LNAPL are measured through time using a fiber optic cable and a spectrometer. Results indicate that the rate of tracer depletion is proportional to the rate of LNAPL flow through the well and the adjacent formation. Tracer dilution methods are demonstrated for vertically averaged LNAPL Darcy velocities of 0.00048 to 0.11 m/d and LNAPL thicknesses of 9 to 24 cm. Over the range of conditions studied, results agree closely with steady-state LNAPL flow rates imposed by pumping. A key parameter for estimating LNAPL flow rates in the formation is the flow convergence factor alpha. Measured convergence factors for 0.030-inch wire wrap, 0.030-inch-slotted polyvinyl chloride (PVC), and 0.010-inch-slotted PVC are 1.7, 0.91, and 0.79, respectively. In addition, methods for using tracer dilution data to determine formation transmissivity to LNAPL are presented. Results suggest that single-well tracer dilution techniques are a viable approach for measuring in situ LNAPL flow and formation transmissivity to LNAPL.
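The dilution principle behind the technique is exponential tracer decay: a well-mixed tracer obeys C(t) = C0·exp(-kt), the flow through the well is q = kV, and dividing by the well's cross-sectional area and the convergence factor alpha gives the formation Darcy velocity. A minimal sketch on synthetic, noise-free data; all parameter values are hypothetical, not the paper's measurements.

```python
import numpy as np

def darcy_velocity(times, conc, volume, area, alpha):
    """Estimate Darcy velocity from single-well tracer dilution.
    Log-linear fit of C(t) = C0*exp(-k*t) gives the decay constant k;
    flow through the well is q = k*V, and the formation Darcy velocity
    is v = q / (area * alpha), where alpha is the convergence factor."""
    k = -np.polyfit(times, np.log(conc), 1)[0]  # decay constant, 1/day
    q = k * volume                              # volumetric flow, m^3/day
    return q / (area * alpha)

# Synthetic example: v_true = 0.05 m/d, well cross-section 0.01 m^2,
# alpha = 1.7 (as reported for the 0.030-inch wire-wrap screen), V = 0.002 m^3.
area, alpha, V = 0.01, 1.7, 0.002
v_true = 0.05
k_true = v_true * area * alpha / V
t = np.linspace(0, 2, 20)                      # days
c = 100.0 * np.exp(-k_true * t)                # tracer concentration
v_est = darcy_velocity(t, c, V, area, alpha)
```

With noiseless data the log-linear fit recovers the imposed velocity exactly; field data would require regression diagnostics and mixing-quality checks.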
Bi-stable vocal fold adduction: A mechanism of modal-falsetto register shifts and mixed registration
Titze, Ingo R
2014-04-01
The origin of vocal registers has generally been attributed to differential activation of cricothyroid and thyroarytenoid muscles in the larynx. Register shifts, however, have also been shown to be affected by glottal pressures exerted on vocal fold surfaces, which can change with loudness, pitch, and vowel. Here it is shown computationally and with empirical data that intraglottal pressures can change abruptly when glottal adductory geometry is changed relatively smoothly from convergent to divergent. An intermediate shape between large convergence and large divergence, namely, a nearly rectangular glottal shape with almost parallel vocal fold surfaces, is associated with mixed registration. It can be less stable than either of the highly angular shapes unless transglottal pressure is reduced and upper stiffness of vocal fold tissues is balanced with lower stiffness. This intermediate state of adduction is desirable because it leads to a low phonation threshold pressure with moderate vocal fold collision. Achieving mixed registration consistently across wide ranges of F0, lung pressure, and vocal tract shapes appears to be a balancing act of coordinating laryngeal muscle activation with vocal tract pressures. Surprisingly, a large transglottal pressure is not facilitative in this process, exacerbating the bi-stable condition and the associated register contrast. PMID:25235006
Scaled Heavy-Ball Acceleration of the Richardson-Lucy Algorithm for 3D Microscopy Image Restoration.
Wang, Hongbin; Miller, Paul C
2014-02-01
The Richardson-Lucy algorithm is one of the most important in image deconvolution. However, a drawback is its slow convergence. A significant acceleration was obtained using the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the image processing MATLAB toolbox. The BA method was developed heuristically with no proof of convergence. In this paper, we introduce the heavy-ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proven convergence rate of O(k^(-2)), where k is the number of iterations. We demonstrate the superior convergence performance of the scaled H-B method, a speedup by a factor of five, on both synthetic and real 3D images.
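The flavor of momentum-accelerated Richardson-Lucy can be sketched in 1D with a BA-style extrapolation step before each multiplicative update. This is an illustrative simplification (circular convolution, noiseless data), not the paper's scaled heavy-ball algorithm.

```python
import numpy as np

def rl_accelerated(y, psf, n_iter=50):
    """Richardson-Lucy deconvolution (1D, circular convolution) with a
    Biggs-Andrews-style momentum prediction before each multiplicative update."""
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * b))
    P, Pc = np.fft.fft(psf), np.conj(np.fft.fft(psf))
    x = np.full_like(y, y.mean())
    x_prev = x.copy()
    g_prev = np.zeros_like(y)
    for _ in range(n_iter):
        g = x - x_prev                       # last step direction
        denom = g_prev @ g_prev
        alpha = 0.0 if denom == 0.0 else min(max((g @ g_prev) / denom, 0.0), 1.0)
        v = np.maximum(x + alpha * g, 1e-12)  # momentum prediction, kept positive
        x_prev, g_prev = x, g
        x = v * conv(y / np.maximum(conv(v, P), 1e-12), Pc)  # RL update at v
    return x

# Synthetic test: two spikes on a background, Gaussian blur, no noise.
n = 64
i = np.arange(n)
psf = np.exp(-np.minimum(i, n - i) ** 2 / 8.0)
psf /= psf.sum()
x_true = np.full(n, 0.1)
x_true[20], x_true[40] = 1.0, 2.0
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf)))
x_hat = rl_accelerated(y, psf)
```

The momentum weight is clipped to [0, 1], as in common BA-style implementations, to keep the extrapolation stable; the update preserves nonnegativity.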
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
On the Local Convergence of Pattern Search
NASA Technical Reports Server (NTRS)
Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
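A minimal coordinate pattern search makes the role of the step-length control parameter concrete: the step is halved only when no poll point improves, so its final value doubles as the stopping criterion and the stationarity measure discussed above. This is a sketch under the assumption of a smooth objective, not the paper's general algorithm class.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10_000):
    """Coordinate pattern search: poll +/- step along each axis and accept
    improvements; halve the step when no poll point improves. The final
    step length serves as an asymptotic measure of first-order stationarity."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:           # traditional stopping criterion
            break
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx, step

# Separable quadratic with minimizer (1, -2).
xmin, fmin, final_step = pattern_search(
    lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2, [5.0, 5.0])
```

On a quadratic, once neither poll direction improves, the iterate is within about half a step of the axis-wise minimizer, which is why the step length bounds the distance to stationarity.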
CONVERGING SUPERGRANULAR FLOWS AND THE FORMATION OF CORONAL PLUMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y.-M.; Warren, H. P.; Muglach, K., E-mail: yi.wang@nrl.navy.mil, E-mail: harry.warren@nrl.navy.mil, E-mail: karin.muglach@nasa.gov
Earlier studies have suggested that coronal plumes are energized by magnetic reconnection between unipolar flux concentrations and nearby bipoles, even though magnetograms sometimes show very little minority-polarity flux near the footpoints of plumes. Here we use high-resolution extreme-ultraviolet (EUV) images and magnetograms from the Solar Dynamics Observatory (SDO) to clarify the relationship between plume emission and the underlying photospheric field. We find that plumes form where unipolar network elements inside coronal holes converge to form dense clumps, and fade as the clumps disperse again. The converging flows also carry internetwork fields of both polarities. Although the minority-polarity flux is sometimes barely visible in the magnetograms, the corresponding EUV images almost invariably show loop-like features in the core of the plumes, with the fine structure changing on timescales of minutes or less. We conclude that the SDO observations are consistent with a model in which plume emission originates from interchange reconnection in converging flows, with the plume lifetime being determined by the ∼1 day evolutionary timescale of the supergranular network. Furthermore, the presence of large EUV bright points and/or ephemeral regions is not a necessary precondition for the formation of plumes, which can be energized even by the weak, mixed-polarity internetwork fields swept up by converging flows.
Work Restructuring in Post-Apartheid South Africa.
ERIC Educational Resources Information Center
Webster, Edward; Omar, Rahmat
2003-01-01
Case studies of South African companies (mining, manufacturing, and telephone call centers) reveal a mix of management strategies that converge with and diverge from past practices. South Africa is attempting to balance the demands of efficiency, employee rights, and racial equity, a challenge that requires overcoming the legacy of the apartheid…
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the convergence testing method's independence of the SA method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA.
The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is the necessity of no further model evaluation and therefore enables checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
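For context, the elementary-effects indexes whose convergence is at issue can be computed in a few lines. The sketch below is a simplified one-at-a-time variant of the Morris screening method on a linear test function; it illustrates the indexes themselves, not the authors' MVA convergence test.

```python
import numpy as np

def morris_mu_star(f, dim, n_traj=50, delta=0.1, seed=0):
    """Simplified Morris screening: elementary effect of parameter i is
    EE_i = (f(x + delta*e_i) - f(x)) / delta at random base points x in [0,1)^dim;
    mu* = mean |EE_i| measures parameter importance."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, dim))
    for t in range(n_traj):
        x = rng.random(dim) * (1 - delta)  # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# Linear test function g(x) = 4*x0 + 2*x1 + 0.1*x2: importance ranks x0 > x1 > x2,
# and for a linear model the elementary effects equal the coefficients exactly.
mu = morris_mu_star(lambda x: 4 * x[0] + 2 * x[1] + 0.1 * x[2], dim=3)
```

Because the test function is linear, mu* is budget-independent here; for nonlinear models mu* fluctuates with the sampling budget, which is exactly the convergence question the abstract addresses.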
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e. H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
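The contraction idea can be sketched abstractly: a fixed-point iteration converges when its map is a contraction, and a relaxation parameter, playing a corrector-like role, can restore contraction for a map that would otherwise diverge. The scalar example below illustrates only this general principle, not the electromagnetic operators of the paper.

```python
import numpy as np

def contractive_solve(g, x0, relax=1.0, tol=1e-12, max_iter=1000):
    """Relaxed fixed-point iteration x <- (1 - relax)*x + relax*g(x).
    If g has Lipschitz constant q < 1 the plain iteration (relax = 1)
    converges; for an expansive affine map, a relaxation factor can make
    the combined map contractive again."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - relax) * x + relax * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# g(x) = cos(x) is a contraction near its fixed point (the Dottie number).
root = contractive_solve(np.cos, 1.0)

# g(x) = -1.5*x + 1 has |g'| = 1.5 > 1 and diverges under plain iteration,
# but with relax = 0.5 the relaxed map has factor |1 - 2.5*0.5| = 0.25 < 1
# and converges to the fixed point x = 0.4.
root2 = contractive_solve(lambda x: -1.5 * x + 1.0, 0.0, relax=0.5)
```

The second call is the essence of a convergence "corrector": the fixed point is unchanged by the relaxation, only the contraction factor of the iteration is.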
NASA Astrophysics Data System (ADS)
Yarmohammadi, M.; Javadi, S.; Babolian, E.
2018-04-01
In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about the error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for fractional interpolation error under the L2-norm. Finally, general error analysis of SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.
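For reference, the Caputo derivative targeted by the SIM is commonly discretized by the standard L1 finite-difference scheme sketched below; this is not the authors' spectral method. For f(t) = t the scheme reproduces the exact value t^(1-α)/Γ(2-α), which makes a convenient self-check.

```python
import math
import numpy as np

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the final grid point t_n = n*dt:
        D^a f(t_n) ~ dt^(-a)/Gamma(2-a) * sum_k b_k * (f_{n-k} - f_{n-k-1}),
    with weights b_k = (k+1)^(1-a) - k^(1-a)."""
    n = len(f_vals) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    diffs = f_vals[n - k] - f_vals[n - k - 1]
    return dt ** (-alpha) / math.gamma(2 - alpha) * (b * diffs).sum()

alpha, dt, n = 0.5, 0.01, 100
t = dt * np.arange(n + 1)
approx = caputo_l1(t, dt, alpha)                      # Caputo derivative of f(t) = t
exact = t[-1] ** (1 - alpha) / math.gamma(2 - alpha)  # analytic value at t = 1
```

For a linear f the L1 sum telescopes to the analytic result exactly, so the comparison isolates floating-point error; for general f the scheme is only O(dt^(2-α)) accurate, the kind of rate the paper's spectral approach improves upon.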
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal control method. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
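The core convergence claim, that an iteratively improved Q-function sequence approaches the optimal Q-function using only transition data, can be illustrated with tabular Q-iteration on a toy two-state MDP. This is a sketch of the general idea, not the PGADP algorithm; the MDP and its rewards are invented for illustration.

```python
import numpy as np

# Toy deterministic MDP: 2 states, 2 actions; next_state[s, a] and reward[s, a]
# stand in for observed (s, a, r, s') transition data.
next_state = np.array([[0, 1], [0, 1]])
reward = np.array([[0.0, 1.0], [0.0, 2.0]])
gamma = 0.9

def q_iteration(n_sweeps=200):
    """Repeatedly apply the Bellman optimality backup
        Q(s, a) <- r(s, a) + gamma * max_a' Q(s', a')
    using the stored transitions; the Q sequence converges geometrically
    (contraction factor gamma) to the optimal Q-function."""
    Q = np.zeros((2, 2))
    for _ in range(n_sweeps):
        Q = reward + gamma * Q[next_state].max(axis=-1)
    return Q

Q = q_iteration()
policy = Q.argmax(axis=1)  # greedy policy from the converged Q-function
```

Here the optimal policy always picks action 1 (move to state 1 and collect reward 2 forever), giving Q(1,1) = 2/(1-γ) = 20 and Q(0,1) = 1 + γ·20 = 19.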
Analysis of Fuel Vaporization, Fuel-Air Mixing, and Combustion in Integrated Mixer-Flame Holders
NASA Technical Reports Server (NTRS)
Deur, J. M.; Cline, M. C.
2004-01-01
Requirements to limit pollutant emissions from the gas turbine engines for the future High-Speed Civil Transport (HSCT) have led to consideration of various low-emission combustor concepts. One such concept is the Integrated Mixer-Flame Holder (IMFH). This report describes a series of IMFH analyses performed with KIVA-II, a multi-dimensional CFD code for problems involving sprays, turbulence, and combustion. To meet the needs of this study, KIVA-II's boundary condition and chemistry treatments are modified. The study itself examines the relationships between fuel vaporization, fuel-air mixing, and combustion. Parameters being considered include: mixer tube diameter, mixer tube length, mixer tube geometry (converging-diverging versus straight walls), air inlet velocity, air inlet swirl angle, secondary air injection (dilution holes), fuel injection velocity, fuel injection angle, number of fuel injection ports, fuel spray cone angle, and fuel droplet size. Cases are run with and without combustion to examine the variations in fuel-air mixing and potential for flashback due to the above parameters. The degree of fuel-air mixing is judged by comparing average, minimum, and maximum fuel/air ratios at the exit of the mixer tube, while flame stability is monitored by following the location of the flame front as the solution progresses from ignition to steady state. Results indicate that fuel-air mixing can be enhanced by a variety of means, the best being a combination of air inlet swirl and a converging-diverging mixer tube geometry. With the IMFH configuration utilized in the present study, flashback becomes more common as the mixer tube diameter is increased and is instigated by disturbances associated with the dilution hole flow.
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.
NASA Technical Reports Server (NTRS)
Brand, J. C.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf
2017-08-21
The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Lagrangian particle method for compressible fluid dynamics
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
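As an illustrative aside, the weighted-least-squares derivative approximation named in contribution (a) can be sketched in one dimension as follows. The Gaussian weight, the stencil, and all names are assumptions for the demo, not the paper's actual discretization:

```python
import numpy as np

def wls_derivative(x_nbr, u_nbr, x0, h, order=2):
    """Approximate du/dx at x0 from scattered neighbor values by fitting
    a local polynomial with weighted least squares (1D sketch)."""
    dx = x_nbr - x0
    # Local polynomial basis 1, dx, dx^2, ... evaluated at the neighbors
    V = np.vander(dx, N=order + 1, increasing=True)
    sw = np.sqrt(np.exp(-(dx / h) ** 2))    # Gaussian kernel weights
    coef, *_ = np.linalg.lstsq(V * sw[:, None], u_nbr * sw, rcond=None)
    return coef[1]                           # coefficient of dx = du/dx at x0

# u(x) = x^2 sampled on nearby points; the exact slope at x0 = 0.5 is 1.0,
# and a quadratic fit recovers it exactly
x_nbr = np.linspace(0.0, 1.0, 11)
slope = wls_derivative(x_nbr, x_nbr ** 2, x0=0.5, h=0.3)
```

Because the basis contains the data exactly, the fit is exact here; on irregular particle clouds the same construction gives the prescribed-order convergence the abstract refers to.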
A mixed shear flexible finite element for the analysis of laminated plates
NASA Technical Reports Server (NTRS)
Putcha, N. S.; Reddy, J. N.
1984-01-01
A mixed shear flexible finite element based on the Hencky-Mindlin type shear deformation theory of laminated plates is presented and its bending behavior is investigated. The element has three displacements, two rotations, and three moments as the generalized degrees of freedom per node. The numerical convergence and accuracy characteristics of the element are investigated by comparing the finite element solutions with the exact solutions. The present study shows that reduced-order integration of the shear stiffness coefficients is necessary to obtain accurate results for thin plates.
Control of wavepacket dynamics in mixed alkali metal clusters by optimally shaped fs pulses
NASA Astrophysics Data System (ADS)
Bartelt, A.; Minemoto, S.; Lupulescu, C.; Vajda, Š.; Wöste, L.
We have performed adaptive feedback optimization of phase-shaped femtosecond laser pulses to control the wavepacket dynamics of small mixed alkali-metal clusters. An optimization algorithm based on Evolutionary Strategies was used to maximize the ion intensities. The optimized pulses for NaK and Na2K converged to pulse trains consisting of numerous peaks. The timing of the elements of the pulse trains corresponds to integer and half integer numbers of the vibrational periods of the molecules, reflecting the wavepacket dynamics in their excited states.
Pattern and Process in the Comparative Study of Convergent Evolution.
Mahler, D Luke; Weber, Marjorie G; Wagner, Catherine E; Ingram, Travis
2017-08-01
Understanding processes that have shaped broad-scale biodiversity patterns is a fundamental goal in evolutionary biology. The development of phylogenetic comparative methods has yielded a tool kit for analyzing contemporary patterns by explicitly modeling processes of change in the past, providing neontologists tools for asking questions previously accessible only for select taxa via the fossil record or laboratory experimentation. The comparative approach, however, differs operationally from alternative approaches to studying convergence in that, for studies of only extant species, convergence must be inferred using evolutionary process models rather than being directly measured. As a result, investigation of evolutionary pattern and process cannot be decoupled in comparative studies of convergence, even though such a decoupling could in theory guard against adaptationist bias. Assumptions about evolutionary process underlying comparative tools can shape the inference of convergent pattern in sometimes profound ways and can color interpretation of such patterns. We discuss these issues and other limitations common to most phylogenetic comparative approaches and suggest ways that they can be avoided in practice. We conclude by promoting a multipronged approach to studying convergence that integrates comparative methods with complementary tests of evolutionary mechanisms and includes ecological and biogeographical perspectives. Carefully employed, the comparative method remains a powerful tool for enriching our understanding of convergence in macroevolution, especially for investigation of why convergence occurs in some settings but not others.
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
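As background for the logarithmic-Sobolev formalism the abstract relies on: a log-Sobolev constant α of a primitive, doubly stochastic Liouvillian controls exponential decay of the relative entropy to the maximally mixed state. A standard statement of this decay from the general log-Sobolev literature (a paraphrase, not a formula quoted from this paper) is:

```latex
D\!\left(e^{t\mathcal{L}}(\rho)\,\middle\|\,\tfrac{\mathbb{1}}{d}\right)
\;\le\;
e^{-2\alpha t}\, D\!\left(\rho\,\middle\|\,\tfrac{\mathbb{1}}{d}\right),
\qquad
D(\rho\|\sigma) = \operatorname{Tr}\!\left[\rho\left(\log\rho-\log\sigma\right)\right],
```

where $\mathcal{L}$ generates the noise, $d$ is the Hilbert-space dimension, and $D$ is the quantum relative entropy; a lower bound on α therefore translates directly into a lower bound on the entropy production rate.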
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
NASA Astrophysics Data System (ADS)
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the Gauss-Seidel-type method converges faster than the SOR-type iterative method.
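For readers unfamiliar with the solvers being compared, a minimal unpreconditioned sketch is given below. The paper's subject is the preconditioned variants, which are not reproduced here; the 3x3 system is a made-up Z-matrix example:

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, maxit=10000):
    """Forward SOR sweeps; omega = 1.0 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, maxit + 1):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it            # converged: return iterate and count
    return x, maxit

# A small Z-matrix (nonpositive off-diagonals), strictly diagonally
# dominant, so both sweeps converge; the exact solution is x = (1, 1, 1)
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([2.0, 2.0, 2.0])
x_gs, it_gs = sor(A, b, omega=1.0)
```

Comparison theorems of the kind the abstract describes bound the spectral radii of the two iteration matrices, which the iteration counts of such sweeps reflect empirically.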
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batha, Steven H.; Fincke, James R.; Schmitt, Mark J.
2012-06-07
LANL has two projects in C10.2: Defect-Induced Mix Experiment (DIME) (ongoing, several runs at Omega; NIF shots this summer); and Shock/Shear (tested at Omega for two years; NIF shots in second half of FY13). Each project is jointly funded by C10.2, other C10 MTEs, and Science Campaigns. DIME is investigating 4π and feature-induced mix in spherically convergent ICF implosions by using imaging of the mix layer. DIME prepared for NIF by demonstrating its PDD mix platform on Omega, including imaging mid-Z doped layers and defects. In FY13, DIME will focus on PDD symmetry-dependent mix and moving burn into the mix region for validation of mix/burn models. Re-Shock and Shear are two laser-driven experiments designed to study the turbulent mixing of materials. In FY-2012, 43 shear and re-shock experimental shots were executed on the OMEGA laser and a complete time history was obtained for both. The FY-2013 goal is to transition the experiments to NIF, where the larger scale will provide a longer time period for mix-layer growth.
Rayleigh-Taylor and Richtmyer-Meshkov instability induced flow, turbulence, and mixing. II
NASA Astrophysics Data System (ADS)
Zhou, Ye
2017-12-01
Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities are well-known pathways towards turbulent mixing layers, in many cases characterized by significant mass and species exchange across the mixing layers (Zhou, 2017. Physics Reports, 720-722, 1-136). Mathematically, the pathway to turbulent mixing requires that the initial interface be multimodal, to permit cross-mode coupling leading to turbulence. Practically speaking, it is difficult to experimentally produce a non-multimodal initial interface. Numerous methods and approaches have been developed to describe the late, multimodal, turbulent stages of RT and RM mixing layers. This paper first presents the initial-condition dependence of RT mixing layers, and introduces parameters that are used to evaluate the level of "mixedness" and "mixed mass" within the layers, the dependence on density differences, and the characteristic anisotropy of this acceleration-driven flow, emphasizing some of the key differences between two-dimensional and three-dimensional RT mixing layers. Next, the RM mixing layers are discussed, and differences with the RT mixing layer are elucidated, including the RM mixing layer's dependence on the Mach number of the initiating shock. Another key feature of RM-induced flows is the response to a reshock event, as frequently seen in shock-tube experiments as well as inertial confinement experiments. A number of approaches to modeling the evolution of these mixing layers are then described, in order of increasing complexity. These include simple buoyancy-drag models; Reynolds-averaged Navier-Stokes models of increasing complexity, including K-ε, K-L, and K-L-a models; and full Reynolds-stress models with more than one length scale. Multifield models and multiphase models have also been implemented. Additional complexities to these flows are examined, as well as modifications to the models to understand the effects of these complexities.
These complexities include the presence of magnetic fields, compressibility, rotation, stratification and additional instabilities. The complications induced by the presence of converging geometries are also considered. Finally, the unique problems of astrophysical and high-energy-density applications, and efforts to model these are discussed.
NASA Astrophysics Data System (ADS)
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
2017-03-01
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
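The outer iteration used here is standard preconditioned conjugate gradients; a minimal sketch follows, with a plain Jacobi (diagonal) preconditioner standing in for the AGMG V-cycle preconditioner of the paper. The model matrix and all names are illustrative assumptions:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; M_inv applies the
    preconditioner (here Jacobi, as a stand-in for an AGMG V-cycle)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p        # update search direction
        rz = rz_new
    return x, maxit

# SPD model problem: 1D finite-difference Laplacian
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x, its = pcg(A, b, lambda r: r / d)      # Jacobi preconditioner
```

In the paper the same outer loop is wrapped around the aggregation-based multigrid V-cycle instead of the diagonal scaling shown here, which is what yields the nearly grid-independent iteration counts reported.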
An Ensemble Approach in Converging Contents of LMS and KMS
ERIC Educational Resources Information Center
Sabitha, A. Sai; Mehrotra, Deepti; Bansal, Abhay
2017-01-01
Currently the challenges in e-Learning are converging the learning content from various sources and managing them within e-learning practices. Data-mining learning algorithms can be used, and the contents can be converged based on the metadata of the objects. Ensemble methods use multiple learning algorithms and can be used to converge the…
Sigilai, Antipa; Hassan, Amin S.; Thoya, Janet; Odhiambo, Rachael; Van de Vijver, Fons J. R.; Newton, Charles R. J. C.; Abubakar, Amina
2017-01-01
Background Despite bearing the largest HIV-related burden, little is known of the Health-Related Quality of Life (HRQoL) among people living with HIV in sub-Saharan Africa. One of the factors contributing to this gap in knowledge is the lack of culturally adapted and validated measures of HRQoL that are relevant for this setting. Aims We set out to adapt the Functional Assessment of HIV Infection (FAHI) Questionnaire, an HIV-specific measure of HRQoL, and evaluate its internal consistency and validity. Methods The three-phase mixed-methods study took place in a rural setting at the Kenyan Coast. Phase one involved a scoping review to describe the evidence base of the reliability and validity of FAHI as well as the geographical contexts in which it has been administered. Phase two involved in-depth interviews (n = 38) to explore the content validity, and initial piloting for face validation of the adapted FAHI. Phase three was quantitative (n = 103) and evaluated the internal consistency, convergent and construct validities of the adapted interviewer-administered questionnaire. Results In the first phase of the study, we identified 16 studies that have used the FAHI. Most (82%) were conducted in North America. Only seven (44%) of the reviewed studies reported on the psychometric properties of the FAHI. In the second phase, most of the participants (37 out of 38) reported satisfaction with word clarity and content coverage whereas 34 (89%) reported satisfaction with relevance of the items, confirming the face validity of the adapted questionnaire during initial piloting. Our participants indicated that HIV impacted on their physical, functional, emotional, and social wellbeing. Their responses overlapped with items in four of the five subscales of the FAHI Questionnaire establishing its content validity. In the third phase, the internal consistency of the scale was found to be satisfactory with subscale Cronbach's α ranging from 0.55 to 0.78.
The construct and convergent validity of the tool were supported by acceptable factor loadings for most of the items on the respective sub-scales and confirmation of expected significant correlations of the FAHI subscale scores with scores of a measure of common mental disorders. Conclusion The adapted interviewer-administered Swahili version of FAHI questionnaire showed initial strong evidence of good psychometric properties with satisfactory internal consistency and acceptable validity (content, face, and convergent validity). It gives impetus for further validation work, especially construct validity, in similar settings before it can be used for research and clinical purposes in the entire East African region. PMID:28380073
Atomistic calculations of dislocation core energy in aluminium
Zhou, X. W.; Sills, R. B.; Ward, D. K.; ...
2017-02-16
A robust molecular dynamics simulation method for calculating dislocation core energies has been developed. This method has unique advantages: it does not require artificial boundary conditions, is applicable for mixed dislocations, and can yield highly converged results regardless of the atomistic system size. Utilizing a high-fidelity bond order potential, we have applied this method in aluminium to calculate the dislocation core energy as a function of the angle β between the dislocation line and Burgers vector. These calculations show that, for the face-centred-cubic aluminium explored, the dislocation core energy follows the same functional dependence on β as the dislocation elastic energy: Ec = A·sin²β + B·cos²β, and this dependence is independent of temperature between 100 and 300 K. By further analysing the energetics of an extended dislocation core, we elucidate the relationship between the core energy and radius of a perfect versus extended dislocation. With our methodology, the dislocation core energy can be accurately accounted for in models of plastic deformation.
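The reported angular dependence Ec(β) = A·sin²β + B·cos²β is linear in A and B, so the coefficients can be extracted from computed core energies by ordinary least squares. A sketch with made-up coefficient values (not the paper's data) follows:

```python
import numpy as np

# Hypothetical core energies sampled over character angles beta in [0, pi/2]
beta = np.linspace(0.0, np.pi / 2, 19)
A_true, B_true = 0.9, 0.6            # made-up coefficients for the demo
E = A_true * np.sin(beta) ** 2 + B_true * np.cos(beta) ** 2

# Linear least squares in the basis {sin^2(beta), cos^2(beta)}
X = np.column_stack([np.sin(beta) ** 2, np.cos(beta) ** 2])
(A_fit, B_fit), *_ = np.linalg.lstsq(X, E, rcond=None)
```

With noisy molecular-dynamics energies the same fit would return the best-fit A and B along with residuals, which is one way to check how well the elastic-like angular form actually holds.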
Flow analysis and design optimization methods for nozzle-afterbody of a hypersonic vehicle
NASA Technical Reports Server (NTRS)
Baysal, O.
1992-01-01
This report summarizes the methods developed for the aerodynamic analysis and the shape optimization of the nozzle-afterbody section of a hypersonic vehicle. Initially, exhaust gases were assumed to be air. Internal-external flows around a single scramjet module were analyzed by solving the 3D Navier-Stokes equations. Then, exhaust gases were simulated by a cold mixture of Freon and Ar. Two different models were used to compute these multispecies flows as they mixed with the hypersonic airflow. Surface and off-surface properties were successfully compared with the experimental data. The Aerodynamic Design Optimization with Sensitivity analysis was then developed. Pre- and post-optimization sensitivity coefficients were derived and used in this quasi-analytical method. These coefficients were also used to predict inexpensively the flow field around a changed shape when the flow field of an unchanged shape was given. Starting with totally arbitrary initial afterbody shapes, independent computations converged to the same optimum shape, which rendered the maximum axial thrust.
Flow analysis and design optimization methods for nozzle afterbody of a hypersonic vehicle
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1991-01-01
This report summarizes the methods developed for the aerodynamic analysis and the shape optimization of the nozzle-afterbody section of a hypersonic vehicle. Initially, exhaust gases were assumed to be air. Internal-external flows around a single scramjet module were analyzed by solving the three-dimensional Navier-Stokes equations. Then, exhaust gases were simulated by a cold mixture of Freon and Argon. Two different models were used to compute these multispecies flows as they mixed with the hypersonic airflow. Surface and off-surface properties were successfully compared with the experimental data. In the second phase of this project, the Aerodynamic Design Optimization with Sensitivity analysis (ADOS) was developed. Pre- and post-optimization sensitivity coefficients were derived and used in this quasi-analytical method. These coefficients were also used to predict inexpensively the flow field around a changed shape when the flow field of an unchanged shape was given. Starting with totally arbitrary initial afterbody shapes, independent computations converged to the same optimum shape, which rendered the maximum axial thrust.
Fictitious Domain Methods for Fracture Models in Elasticity.
NASA Astrophysics Data System (ADS)
Court, S.; Bodart, O.; Cayol, V.; Koko, J.
2014-12-01
As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic Finite Element Method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or, more generally, through iterations: the goal is to change as little as possible when the crack geometry is modified. In particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), saving computation time and resources compared with classic finite element methods. The method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and position of the crack. We present numerical experiments which highlight the accuracy of our method (using convergence curves), the optimality of errors, and the robustness with respect to the geometry (with computation of errors on some quantities for all kinds of geometric configurations). We also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack inside a three-dimensional domain, and the corresponding computation time and accuracy are compared with results from a mixed boundary element method. In order to determine the crack's geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near-neighborhood algorithm. Performances are compared with those obtained combining a mixed boundary element method with the same inversion algorithm.
The Mixed Finite Element Multigrid Method for Stokes Equations
Muzhinji, K.; Shateyi, S.; Motsa, S. S.
2015-01-01
The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers enhance good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results. We present the numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361
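To illustrate the multigrid idea in its simplest setting, here is a two-grid correction cycle with damped-Jacobi smoothing applied to a 1D Poisson model problem. This is a scalar stand-in only: the paper's point is that the Stokes saddle-point system needs specialized smoothers such as Braess-Sarazin rather than the simple one used here.

```python
import numpy as np

def poisson_matrix(n, h):
    """1D finite-difference Laplacian on n interior points, spacing h."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2

def two_grid(A, b, x, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: damped-Jacobi smoothing, full-weighting
    restriction, exact coarse solve, linear-interpolation correction."""
    D = np.diag(A)
    for _ in range(nu):                          # pre-smoothing
        x = x + omega * (b - A @ x) / D
    n = len(b)
    nc = (n - 1) // 2                            # coarse interior points
    R = np.zeros((nc, n))                        # full weighting
    for i in range(nc):
        R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
    P = 2.0 * R.T                                # linear interpolation
    Ac = R @ A @ P                               # Galerkin coarse operator
    ec = np.linalg.solve(Ac, R @ (b - A @ x))    # exact coarse solve
    x = x + P @ ec                               # coarse-grid correction
    for _ in range(nu):                          # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

n = 31                                           # odd, so coarsening nests
h = 1.0 / (n + 1)
A = poisson_matrix(n, h)
xs = np.linspace(h, 1 - h, n)
b = np.pi ** 2 * np.sin(np.pi * xs)              # exact solution sin(pi x)
x = np.zeros(n)
for _ in range(15):
    x = two_grid(A, b, x)
```

Replacing the exact coarse solve with a recursive call gives the V-cycle; for Stokes, the Jacobi sweeps above would be replaced by one of the saddle-point smoothers the paper compares.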
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known.
This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
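Of the three SA methods tested, the Morris Elementary Effects screening is the simplest to sketch. Below is the classic method on a toy model (not the MVA convergence test proposed in the abstract; the sample count, step size, and model are made up):

```python
import numpy as np

def morris_ee(model, k, r=50, delta=0.5, seed=0):
    """Elementary-effects screening: for r random base points in [0,1]^k,
    perturb one parameter at a time by delta and record the finite
    difference; mu* = mean absolute effect ranks parameter importance."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # keep x + delta in [0,1]
        y0 = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            ee[j, i] = (model(xp) - y0) / delta
    return np.abs(ee).mean(axis=0)                   # mu* per parameter

# Toy model: parameter 0 dominates, parameter 2 is inert
mu_star = morris_ee(lambda x: 10.0 * x[0] + x[1] + 0.0 * x[2], k=3)
```

Convergence checks of the MVA kind ask how stable such indexes are as the budget r varies, without spending extra model runs on bootstrapping.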
Validity of three clinical performance assessments of internal medicine clerks.
Hull, A L; Hodder, S; Berger, B; Ginsberg, D; Lindheim, N; Quan, J; Kleinhenz, M E
1995-06-01
To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks. A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991. There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified. There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.
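The MTMM logic, that convergent correlations (same trait, different methods) should exceed divergent correlations (different traits, different methods), can be illustrated with synthetic scores. All data below are simulated assumptions, not the study's ratings:

```python
import numpy as np

# Simulated trait-by-method scores: two methods (CEF, OSCE) measuring the
# same latent clinical-skill trait, plus an unrelated trait/method score
rng = np.random.default_rng(1)
n = 410                                  # matches the cohort size reported
skill = rng.normal(size=n)               # latent clinical-skill trait
cef_skill = skill + rng.normal(size=n)   # CEF rating of skill (noisy)
osce_skill = skill + rng.normal(size=n)  # OSCE score of skill (noisy)
other = rng.normal(size=n)               # unrelated trait, other method

# Convergent: same trait via different methods; divergent: unrelated pair
r_convergent = np.corrcoef(cef_skill, osce_skill)[0, 1]
r_divergent = np.corrcoef(cef_skill, other)[0, 1]
```

With equal signal and noise variance the population convergent correlation is 0.5, in the same "moderate" range the abstract reports (.29 to .47), while the divergent correlation hovers near zero.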
Nyongesa, Moses K; Sigilai, Antipa; Hassan, Amin S; Thoya, Janet; Odhiambo, Rachael; Van de Vijver, Fons J R; Newton, Charles R J C; Abubakar, Amina
2017-01-01
Despite bearing the largest HIV-related burden, little is known of the Health-Related Quality of Life (HRQoL) among people living with HIV in sub-Saharan Africa. One of the factors contributing to this gap in knowledge is the lack of culturally adapted and validated measures of HRQoL that are relevant for this setting. We set out to adapt the Functional Assessment of HIV Infection (FAHI) Questionnaire, an HIV-specific measure of HRQoL, and evaluate its internal consistency and validity. The three-phase mixed-methods study took place in a rural setting at the Kenyan Coast. Phase one involved a scoping review to describe the evidence base of the reliability and validity of FAHI as well as the geographical contexts in which it has been administered. Phase two involved in-depth interviews (n = 38) to explore the content validity, and initial piloting for face validation of the adapted FAHI. Phase three was quantitative (n = 103) and evaluated the internal consistency, convergent and construct validities of the adapted interviewer-administered questionnaire. In the first phase of the study, we identified 16 studies that have used the FAHI. Most (82%) were conducted in North America. Only seven (44%) of the reviewed studies reported on the psychometric properties of the FAHI. In the second phase, most of the participants (37 out of 38) reported satisfaction with word clarity and content coverage whereas 34 (89%) reported satisfaction with relevance of the items, confirming the face validity of the adapted questionnaire during initial piloting. Our participants indicated that HIV impacted on their physical, functional, emotional, and social wellbeing. Their responses overlapped with items in four of the five subscales of the FAHI Questionnaire establishing its content validity. In the third phase, the internal consistency of the scale was found to be satisfactory with subscale Cronbach's α ranging from 0.55 to 0.78.
The construct and convergent validity of the tool were supported by acceptable factor loadings for most of the items on the respective sub-scales and confirmation of expected significant correlations of the FAHI subscale scores with scores of a measure of common mental disorders. The adapted interviewer-administered Swahili version of FAHI questionnaire showed initial strong evidence of good psychometric properties with satisfactory internal consistency and acceptable validity (content, face, and convergent validity). It gives impetus for further validation work, especially construct validity, in similar settings before it can be used for research and clinical purposes in the entire East African region.
Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm.
Sun, Peng; Chang, Shengqian; Liu, Siqi; Tao, Xiao; Wang, Chang; Zheng, Zhenrong
2018-04-16
In this paper, a method is proposed to implement a noise-reduced three-dimensional (3D) holographic near-eye display using a phase-only computer-generated hologram (CGH). The CGH is calculated with a double-convergence-light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm simultaneously. The phase of the first convergence light replaces the random phase as the iterative initial value, and the phase of the second convergence light modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments were carried out to verify the feasibility of the proposed method. The results indicate that the method effectively reduces noise in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees, and the experimental light path in the 4-f system is shortened. For the 3D experiments, the results demonstrate that the proposed algorithm can present 3D images with a 180 cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) displays.
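The core GS loop with a convergence-light initial phase can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the propagation model (a plain FFT), grid size, and focal parameter are assumptions, and only the first convergence-light phase (replacing the random initial phase) is shown.

```python
import numpy as np

def quadratic_phase(n, focal):
    """Paraxial phase of a converging spherical wave (arbitrary units),
    used instead of a random initial phase (illustrative assumption)."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    return -np.pi * (x**2 + y**2) / focal

def gs_hologram(target_amp, iters=30, focal=5000.0):
    """Gerchberg-Saxton loop with a convergence-light initial phase.
    A plain FFT stands in for the true propagation model."""
    n = target_amp.shape[0]
    phase = quadratic_phase(n, focal)              # convergence light, not random
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))    # propagate to image plane
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(field))      # back-propagate, keep phase only
    return phase

# toy target: a bright square on a dark background
n = 64
target = np.zeros((n, n))
target[24:40, 24:40] = 1.0
cgh = gs_hologram(target / np.linalg.norm(target))
recon = np.abs(np.fft.fft2(np.exp(1j * cgh)))
```

The reconstruction should concentrate energy on the target pixels, with the quadratic initial phase serving the role the paper assigns to the first convergence light.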
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of MCMC sampling, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that helps an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One current research interest in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis both demonstrate the workability of SCoPE and provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
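The adaptive-covariance ingredient alone can be sketched on a toy two-parameter Gaussian posterior. This is a generic Haario-style adaptive Metropolis sketch, not SCoPE itself; delayed rejection and pre-fetching are omitted, and the target, step counts, and scaling constant are illustrative assumptions.

```python
import numpy as np

def log_target(x):
    """Toy correlated 2-D Gaussian standing in for a cosmological posterior."""
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

def adaptive_metropolis(n_steps=20000, adapt_every=500, seed=0):
    """Metropolis sampler whose proposal covariance is re-estimated from the
    chain history as it progresses (the adaptive-covariance idea above)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    prop_cov = 0.1 * np.eye(2)                 # initial proposal covariance
    chain, accepted = [], 0
    for i in range(n_steps):
        y = rng.multivariate_normal(x, prop_cov)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x, accepted = y, accepted + 1
        chain.append(x.copy())
        if (i + 1) % adapt_every == 0 and i > 1000:
            # scale by 2.38^2/dim (a standard choice) plus a small jitter
            prop_cov = (2.38**2 / 2) * np.cov(np.array(chain).T) + 1e-6 * np.eye(2)
    return np.array(chain), accepted / n_steps
```

After adaptation the proposal tracks the target's correlation structure, which is what prevents the slow mixing that a fixed isotropic proposal would give on a correlated posterior.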
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady-state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations with temporal schemes. Nonlinear solvers (e.g., Newton's method) take less CPU time when computing the steady state directly; however, their convergence is sensitive to the initial guess, often leading to divergence or to convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires a numerical method that is efficient and robust to the initial condition or initial guess, with a better likelihood of converging to an inhomogeneous pattern. Here, a new approach is proposed that combines the robustness of temporal schemes with the fast convergence of Newton's method in solving steady states of reaction-diffusion equations. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton's method) in finding inhomogeneous patterns. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching for inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
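The temporal-scheme route to a steady state that AIIE improves on can be sketched with plain implicit Euler plus an inner Newton solve, here for a 1-D Allen-Cahn-type equation with no-flux boundaries. The equation, grid, and step size are illustrative assumptions; the adaptive stepping and inexact inner solves of AIIE are not reproduced.

```python
import numpy as np

def steady_state_allen_cahn(n=100, dt=1.0, diff=1e-3, tol=1e-10, max_steps=5000):
    """March u_t = diff*u_xx + u - u^3 (no-flux BCs) to a steady state with
    implicit Euler; each step is solved by a few Newton iterations.
    Returns the state and the steady-state residual norm."""
    h = 1.0 / (n - 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    A[0, 1] *= 2.0                     # reflecting (no-flux) boundaries
    A[-1, -2] *= 2.0
    x = np.linspace(0.0, 1.0, n)
    u = np.tanh((x - 0.5) / 0.1)       # inhomogeneous (front-like) initial guess
    for _ in range(max_steps):
        u_old = u.copy()
        for _ in range(5):             # Newton iterations for the implicit step
            F = u - u_old - dt * (diff * A @ u + u - u**3)
            J = np.eye(n) - dt * (diff * A + np.diag(1.0 - 3.0 * u**2))
            u = u - np.linalg.solve(J, F)
        if np.linalg.norm(u - u_old) < tol:
            break
    res = np.linalg.norm(diff * A @ u + u - u**3, np.inf)
    return u, res
```

Starting from an inhomogeneous guess, the time-marching converges to an inhomogeneous (front-like) steady state, whereas a direct Newton solve from a poor guess could collapse to the homogeneous state, which is the robustness trade-off discussed above.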
Decoding Mixed Signals: Survival in the Demise of Affirmative Action.
ERIC Educational Resources Information Center
Neff, Heather
Among personal memories for one minority instructor in literature is witnessing the civil rights movement, that defining period in which people of African descent broke out of the chrysalis of "Jim Crow" and transformed themselves from "colored" to "Black." In 1995, 1,000,000 Black men once again converged on the…
ERIC Educational Resources Information Center
Webb, Christian A.; Schwab, Zachary J.; Weber, Mareen; DelDonno, Sophie; Kipman, Maia; Weiner, Melissa R.; Killgore, William D. S.
2013-01-01
The construct of emotional intelligence (EI) has garnered increased attention in the popular media and scientific literature. Several competing measures of EI have been developed, including self-report and performance-based instruments. The current study replicates and expands on previous research by examining three competing EI measures…
Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging
2008-09-18
Synthesized nanoparticles (1 mg/ml (Mn+Fe)) are mixed with cancer cells (MCF7) and heat generation efficacy was measured with the cell viability under… fabrication of MnFe2O4, which has superior magnetic properties compared to other types of metal ferrites. [Figure 1: Magnetic nanoparticle for disease…]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gariboldi, C.; E-mail: cgariboldi@exa.unrc.edu.ar; Tarzia, D.
2003-05-21
We consider a steady-state heat conduction problem P_α with mixed boundary conditions for the Poisson equation, depending on a positive parameter α that represents the heat transfer coefficient on a portion Γ1 of the boundary of a given bounded domain in R^n. We formulate distributed optimal control problems over the internal energy g for each α. We prove that the optimal control g_op^α and its corresponding system state u_{g_op^α} and adjoint state p_{g_op^α} converge strongly to g_op, u_{g_op}, and p_{g_op}, respectively, in adequate functional spaces. We also prove that these limit functions are, respectively, the optimal control and the corresponding system and adjoint states of another distributed optimal control problem for the same Poisson equation with a different boundary condition on the portion Γ1. We use fixed point and elliptic variational inequality theories.
Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2014-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose two decision mechanisms that can produce the slope effect, and we test them in three experiments. The evidence mixing account assumes that people change how they weight item versus source evidence based on which source is stronger, and the converging criteria account assumes that participants become more willing to make high confidence source responses for test probes that have higher item strength. Results failed to support the evidence mixing account, in that the slope effect emerged even when item evidence was not informative for the source judgment (that is, in tests that included strong and weak items from both sources). In contrast, results showed strong support for the converging criteria account. This account not only accommodated the unequal-strength slope effect, but also made a prediction for unstudied (new) items that was empirically confirmed: participants made more high confidence source responses for new items when they were more confident that the item was studied. The converging criteria account has an advantage over accounts based on source recollection or evidence variability, as the latter accounts do not predict the relationship between recognition and source confidence for new items. PMID:23565789
Retail Food Store Access in Rural Appalachia: A Mixed Methods Study.
Thatcher, Esther; Johnson, Cassandra; Zenk, Shannon N; Kulbok, Pamela
2017-05-01
To describe how characteristics of food retail stores (potential access) and other factors influence self-reported food shopping behavior (realized food access) among low-income, rural Central Appalachian women. Cross-sectional descriptive. Potential access was assessed through store mapping and in-store food audits. Factors influencing consumers' realized access were assessed through in-depth interviews. Results were merged using a convergent parallel mixed methods approach. Food stores (n = 50) and adult women (n = 9) in a rural Central Appalachian county. Potential and realized food access were described across five dimensions: availability, accessibility, affordability, acceptability, and accommodation. Supermarkets had better availability of healthful foods, followed by grocery stores, dollar stores, and convenience stores. On average, participants lived within 10 miles of 3.9 supermarkets or grocery stores, and traveled 7.5 miles for major food shopping. Participants generally shopped at the closest store that met their expectations for food availability, price, service, and atmosphere. Participants' perceptions of stores diverged from each other and from in-store audit findings. Findings from this study can help public health nurses engage with communities to make affordable, healthy foods more accessible. Recommendations are made for educating low-income consumers and partnering with food stores. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Roncke, Nancy
This formative, convergent mixed methods research study investigated the impact of Socratic Seminars on eighth grade science students' independent comprehension of science texts. The study also highlighted how eighth grade students of varying reading abilities interacted with and comprehended science texts differently during and after the use of Socratic Seminars. In order to document any changes in the students' overall comprehension of science texts, this study compared the experimental and control groups' pre- and post-test performances on the Content Area Reading Assessment (Leslie & Caldwell, 2014) and self-perception surveys on students' scientific reading engagement. Student think-alouds and interviews also captured the students' evolving understandings of the science texts. At the conclusion of this sixteen-week study, the achievement gap between the experimental and control group was closed in five of the seven categories on the Content Area Reading Assessment, including supporting an inference with textual evidence, determining central ideas, explaining why or how, determining word meaning, and summarizing a science text. Students' self-perception surveys were more positive regarding reading science texts after the Socratic Seminars. Finally, the student think-alouds revealed that some students moved from a literal interpretation of the science texts to inquiries that questioned the text and world events.
Stewart, Jennifer M
2014-01-01
To assess the barriers and facilitators to using African American churches as sites for implementation of evidence-based HIV interventions among young African American women. Mixed methods cross-sectional design. African American churches in Philadelphia, PA. 142 African American pastors, church leaders, and young adult women ages 18 to 25. Mixed methods convergent parallel design. The majority of young adult women reported engaging in high-risk HIV-related behaviors. Although church leaders reported willingness to implement HIV risk-reduction interventions, they were unsure of how to initiate this process. Key facilitators to the implementation of evidence-based interventions included the perception of the leadership and church members that HIV interventions were needed and that the church was a promising venue for them. A primary barrier to implementation in this setting is the perception that discussions of sexuality should be private. Implementation of evidence-based HIV interventions for young adult African American women in church settings is feasible and needed. Building a level of comfort in discussing matters of sexuality and adapting existing evidence-based interventions to meet the needs of young women in church settings is a viable approach for successful implementation. © 2014 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.
Dimensions of Posttraumatic Growth in Patients With Cancer: A Mixed Method Study.
Heidarzadeh, Mehdi; Rassouli, Maryam; Brant, Jeannine M; Mohammadi-Shahbolaghi, Farahnaz; Alavi-Majd, Hamid
2017-08-12
Posttraumatic growth (PTG) refers to positive outcomes after exposure to stressful events. Previous studies suggest cross-cultural differences in the nature and extent of PTG. The aim of this study was to explore the dimensions of PTG in Iranian patients with cancer. A mixed methods study with a convergent parallel design was applied to clarify and determine the dimensions of PTG. Using the Posttraumatic Growth Inventory (PTGI), confirmatory factor analysis was used to quantitatively identify dimensions of PTG in 402 patients with cancer. Simultaneously, phenomenological methodology (in-depth interviews with 12 patients) was used to describe and interpret the lived experiences of cancer patients in the qualitative part of the study. The five dimensions of the original PTGI were confirmed. Qualitatively, new dimensions of PTG emerged, including "inner peace and other positive personal attributes," "finding meaning of life," "being a role model," and "performing health promoting behaviors." Results of the study indicated that PTG is a five-dimensional concept with a broad range of subthemes for Iranian cancer patients and that the PTGI did not reflect all growth dimensions in this population. Awareness of PTG dimensions can enable nurses to guide their use as coping strategies and provide context for positive changes in patients to promote quality care.
Numerical Method for Darcy Flow Derived Using Discrete Exterior Calculus
NASA Astrophysics Data System (ADS)
Hirani, A. N.; Nakshatrala, K. B.; Chaudhry, J. H.
2015-05-01
We derive a numerical method for Darcy flow, and also for Poisson's equation in mixed (first order) form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is one of its discretizations on simplicial complexes such as triangle and tetrahedral meshes. DEC is a coordinate invariant discretization, in that it does not depend on the embedding of the simplices or the whole mesh. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for a spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solutions in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. We also show numerical evidence of convergence of the flux and the pressure. A convergence experiment is included for Darcy flow on a surface. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this article. We also include a discussion of the boundary condition in terms of exterior calculus.
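The saddle-point structure with a diagonal Hodge-star block can be illustrated in one dimension, where the DEC discretization reduces to a mixed finite-volume scheme. This is a sketch under that simplification, not the authors' simplicial DEC code; the permeability jump, harmonic edge averaging, and grid size are assumptions.

```python
import numpy as np

def darcy_mixed_1d(n=50, kl=1.0, kr=10.0, pl=1.0, pr=0.0):
    """Mixed (flux-pressure) scheme for -(k p')' = 0 on [0,1] with a
    permeability jump at x = 0.5 and Dirichlet pressures pl, pr.
    Unknowns: n+1 edge fluxes q and n cell pressures p; the diagonal
    matrix diag(1/k_edge) plays the role of the diagonal Hodge star."""
    h = 1.0 / n
    xc = (np.arange(n) + 0.5) * h
    kc = np.where(xc < 0.5, kl, kr)                # cell permeabilities
    ke = np.empty(n + 1)                           # edge permeabilities
    ke[0], ke[-1] = kc[0], kc[-1]
    ke[1:-1] = 2.0 * kc[:-1] * kc[1:] / (kc[:-1] + kc[1:])   # harmonic mean
    m = 2 * n + 1
    A = np.zeros((m, m))
    b = np.zeros(m)
    A[:n + 1, :n + 1] = np.diag(1.0 / ke)          # diagonal "Hodge star" block
    # Darcy-law rows: q/k + grad p = 0 (half cells at the boundaries)
    A[0, n + 1] = 2.0 / h
    b[0] = 2.0 * pl / h
    A[n, 2 * n] = -2.0 / h
    b[n] = -2.0 * pr / h
    for i in range(1, n):
        A[i, n + 1 + i] = 1.0 / h
        A[i, n + i] = -1.0 / h
    # mass-conservation rows: net flux out of each cell is zero
    for i in range(n):
        A[n + 1 + i, i] = -1.0
        A[n + 1 + i, i + 1] = 1.0
    sol = np.linalg.solve(A, b)
    return sol[:n + 1], sol[n + 1:]                # fluxes, pressures
```

With the jump aligned to an edge and harmonic averaging, the computed flux is constant and matches the exact value (pl − pr)/∫(1/k)dx, illustrating how a diagonal flux block still handles discontinuous permeability.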
A new look at the robust control of discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Todorov, M. G.; Fragoso, M. D.
2016-03-01
In this paper, we make a foray into the role played by a set of four operators in the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers that are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle and for an underactuated robotic arm.
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1995-01-01
Underintegrated methods are investigated with respect to their stability and convergence properties. The focus is on identifying regions where they work and regions where techniques such as hourglass viscosity and hourglass control can be used. Results obtained show that underintegrated methods typically lead to finite element stiffness matrices with spurious modes in the solution. However, problems exist (scalar elliptic boundary value problems) where underintegration with hourglass control yields convergent solutions. Also, stress averaging in underintegrated stiffness calculations does not necessarily lead to stable or convergent stress states.
Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers
Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray
2014-01-01
We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Of particular note is the inexact Newton method. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop, tailoring and optimizing both linear solvers for a faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without sacrificing too much of its convergence rate: specifically, we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method. Our analysis shows that the nonlinear methods accompanied by a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence for the tested molecules. The inexact Newton method in particular exhibits impressive performance when combined with highly efficient linear solvers tailored to its special requirements. PMID:24723843
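A damped SOR iteration with a local Newton update per grid point, one of the relaxation strategies discussed above, can be sketched on a 1-D nonlinear PB-like problem. The model equation, boundary values, and relaxation parameters are illustrative assumptions, not the authors' 3-D finite-difference setup.

```python
import numpy as np

def nonlinear_sor(n=64, omega=1.5, damp=1.0, tol=1e-10, max_sweeps=5000):
    """Damped SOR with a local Newton update per point for the 1-D
    nonlinear PB-like problem -u'' + sinh(u) = 0, u(0) = u(1) = 1."""
    h = 1.0 / (n + 1)
    u = np.ones(n)                      # interior unknowns
    for sweep in range(max_sweeps):
        err = 0.0
        for i in range(n):
            ul = u[i - 1] if i > 0 else 1.0
            ur = u[i + 1] if i < n - 1 else 1.0
            r = (2.0 * u[i] - ul - ur) / h**2 + np.sinh(u[i])   # local residual
            dr = 2.0 / h**2 + np.cosh(u[i])                      # local Jacobian
            du = -omega * damp * r / dr     # over-relaxed, damped Newton step
            u[i] += du
            err = max(err, abs(du))
        if err < tol:
            return u, sweep + 1
    return u, max_sweeps
```

Lowering `damp` below 1 is the kind of safeguard the abstract describes for reducing convergence failures of over-relaxation on strongly nonlinear problems, at the price of a slower convergence rate.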
Genome-Wide Convergence during Evolution of Mangroves from Woody Plants.
Xu, Shaohua; He, Ziwen; Guo, Zixiao; Zhang, Zhang; Wyckoff, Gerald J; Greenberg, Anthony; Wu, Chung-I; Shi, Suhua
2017-04-01
When living organisms independently invade a new environment, the evolution of similar phenotypic traits is often observed. An interesting but contentious issue is whether the underlying molecular biology also converges in the new habitat. Independent invasions of tropical intertidal zones by woody plants, collectively referred to as mangrove trees, represent some dramatic examples. The high salinity, hypoxia, and other stressors in the new habitat might have affected both genomic features and protein structures. Here, we developed a new method for detecting Convergence at Conservative Sites (CCS) and applied it to the genomic sequences of mangroves. In simulations, the CCS method drastically reduces random convergence at rapidly evolving sites as well as falsely inferred convergence caused by misinference of the ancestral character. In mangrove genomes, we estimated that ∼400 genes have experienced convergence above the background level of convergence in the nonmangrove relatives. The convergent genes are enriched in pathways related to stress response and embryo development, which could be important for mangroves' adaptation to the new habitat. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Multigrid methods for isogeometric discretization
Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.
2013-01-01
We present (geometric) multigrid methods for isogeometric discretizations of scalar second-order elliptic problems. The smoothing property of the relaxation method and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, the convergence factor, iteration counts for V-, W-, and F-cycles, and the linear dependence of V-cycle convergence on the number of smoothing steps. For two dimensions, the numerical results include problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement level ℓ, whereas for three dimensions only the constant-coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p=4, and for C0 and Cp-1 smoothness. PMID:24511168
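The two-grid correction scheme underlying these multigrid cycles can be sketched for the 1-D Poisson problem with weighted-Jacobi smoothing and Galerkin coarsening. This toy sketch uses standard finite differences rather than isogeometric B-spline discretizations; grid sizes and smoothing counts are assumptions.

```python
import numpy as np

def two_grid_poisson(n=63, nu=3, cycles=15):
    """Two-grid correction scheme for -u'' = f on (0,1), u(0)=u(1)=0, with
    weighted-Jacobi smoothing, full-weighting restriction, linear
    interpolation, and a Galerkin coarse operator. Returns the error
    history against the exact solution sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    u_exact = np.sin(np.pi * x)
    f = np.pi**2 * u_exact
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    nc = (n - 1) // 2
    P = np.zeros((n, nc))                  # linear interpolation
    for j in range(nc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    R = 0.5 * P.T                          # full weighting
    Ac = R @ A @ P                         # Galerkin coarse operator
    d = np.diag(A)

    def smooth(u, rhs, steps):
        for _ in range(steps):
            u = u + (2.0 / 3.0) * (rhs - A @ u) / d   # weighted Jacobi
        return u

    u = np.zeros(n)
    errs = [np.linalg.norm(u - u_exact)]
    for _ in range(cycles):
        u = smooth(u, f, nu)                              # pre-smoothing
        u = u + P @ np.linalg.solve(Ac, R @ (f - A @ u))  # coarse correction
        u = smooth(u, f, nu)                              # post-smoothing
        errs.append(np.linalg.norm(u - u_exact))
    return np.array(errs)
```

The error drops by a grid-independent factor per cycle until it reaches the discretization-error floor, which is the uniform two-grid convergence the classical theory predicts.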
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
Kadarmideen, Haja N; Janss, Luc L G
2005-11-01
Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC records, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of the MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these categorical data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (σ_u²). Consequently, heritabilities for mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes alone. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The MITM variant with an informative prior on σ_u² showed significant improvement in marginal distributions and accuracy of parameters. The MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.
How hot? Systematic convergence of the replica exchange method using multiple reservoirs.
Ruscio, Jory Z; Fawzi, Nicolas L; Head-Gordon, Teresa
2010-02-01
We have devised a systematic approach to converge a replica exchange molecular dynamics simulation by dividing the full temperature range into a series of higher temperature reservoirs and a finite number of lower temperature subreplicas. A defined highest temperature reservoir of equilibrium conformations is used to help converge a lower but still hot temperature subreplica, which in turn serves as the high-temperature reservoir for the next set of lower temperature subreplicas. The process is continued until an optimal temperature reservoir is reached to converge the simulation at the target temperature. This gradual convergence of subreplicas allows for better and faster convergence at the temperature of interest and all intermediate temperatures for thermodynamic analysis, as well as optimizing the use of multiple processors. We illustrate the overall effectiveness of our multiple reservoir replica exchange strategy by comparing sampling and computational efficiency with respect to replica exchange, as well as comparing methods when converging the structural ensemble of the disordered Abeta(21-30) peptide simulated with explicit water by comparing calculated Rotating Overhauser Effect Spectroscopy intensities to experimentally measured values. Copyright 2009 Wiley Periodicals, Inc.
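The reservoir idea, replacing the hottest replica with independent draws from a pre-equilibrated ensemble and exchanging configurations through the standard swap criterion, can be sketched on a 1-D double-well potential. Everything here (potential, temperatures, move sizes) is an illustrative assumption; it is not the authors' molecular dynamics setup.

```python
import numpy as np

def energy(x):
    return (x * x - 1.0) ** 2              # double well, minima at x = +/-1

def reservoir_draw(rng, t_res):
    """Exact draw from exp(-E/T_res) by rejection: the 'reservoir' of
    equilibrium configurations at the highest temperature."""
    while True:
        x = rng.uniform(-2.5, 2.5)
        if rng.uniform() < np.exp(-energy(x) / t_res):
            return x

def reservoir_pt(temps=(0.05, 0.2), t_res=0.5, n_iter=10000, seed=3):
    """Two replicas plus a reservoir. Swaps use the standard criterion
    exp[(1/Ti - 1/Tj)(Ei - Ej)]."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 1.0])               # replica configurations
    samples = []
    for _ in range(n_iter):
        for k, t in enumerate(temps):      # local Metropolis move per replica
            y = x[k] + rng.normal(0.0, 0.3)
            if np.log(rng.uniform()) < (energy(x[k]) - energy(y)) / t:
                x[k] = y
        # hottest replica exchanges with a fresh reservoir configuration
        xr = reservoir_draw(rng, t_res)
        if np.log(rng.uniform()) < (1 / temps[-1] - 1 / t_res) * (energy(x[-1]) - energy(xr)):
            x[-1] = xr
        # neighbour swap between the cold and hot replicas
        if np.log(rng.uniform()) < (1 / temps[0] - 1 / temps[1]) * (energy(x[0]) - energy(x[1])):
            x[0], x[1] = x[1], x[0]
        samples.append(x[0])               # track the target (cold) replica
    return np.array(samples)
```

Without the reservoir, the cold replica would be trapped in one well at this temperature; the reservoir feeds well-to-well transitions down the temperature ladder, which is the convergence mechanism the abstract describes.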
Use of Picard and Newton iteration for solving nonlinear ground water flow equations
Mehl, S.
2006-01-01
This study examines the use of Picard and Newton iteration to solve the nonlinear, saturated ground water flow equation. Here, a simple three-node problem is used to demonstrate the convergence difficulties that can arise when solving the nonlinear, saturated ground water flow equation in both homogeneous and heterogeneous systems with and without nonlinear boundary conditions. For these cases, the characteristic types of convergence patterns are examined. Viewing these convergence patterns as orbits of an attractor in a dynamical system provides further insight. It is shown that the nonlinearity that arises from nonlinear head-dependent boundary conditions can cause more convergence difficulties than the nonlinearity that arises from flow in an unconfined aquifer. Furthermore, the effects of damping on both convergence and convergence rate are investigated. It is shown that no single strategy is effective for all problems and how understanding pitfalls and merits of several methods can be helpful in overcoming convergence difficulties. Results show that Picard iterations can be a simple and effective method for the solution of nonlinear, saturated ground water flow problems.
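A damped Picard iteration for a steady unconfined-flow problem can be sketched as follows: the transmissivity is frozen at the previous head, the resulting linear system is solved, and the update is optionally damped. The 1-D Dupuit-type model and its parameters are illustrative assumptions, not the paper's three-node test problem.

```python
import numpy as np

def picard_unconfined(n=51, h0=10.0, h1=5.0, damp=1.0, tol=1e-10, max_iter=200):
    """Damped Picard iteration for steady unconfined (Dupuit) flow
    d/dx(h dh/dx) = 0, h(0)=h0, h(1)=h1: freeze the transmissivity at the
    previous head, solve the linear system, then damp the update.
    Exact solution: h(x) = sqrt(h0^2 + (h1^2 - h0^2) x)."""
    hgrid = np.linspace(h0, h1, n)             # initial guess for the head
    for it in range(max_iter):
        k = 0.5 * (hgrid[:-1] + hgrid[1:])     # edge transmissivity (old head)
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = h0, h1
        for i in range(1, n - 1):
            A[i, i - 1] = k[i - 1]
            A[i, i] = -(k[i - 1] + k[i])
            A[i, i + 1] = k[i]
        h_lin = np.linalg.solve(A, b)          # linear solve with frozen k
        h_new = hgrid + damp * (h_lin - hgrid) # damped Picard update
        if np.max(np.abs(h_new - hgrid)) < tol:
            return h_new, it + 1
        hgrid = h_new
    return hgrid, max_iter
```

Setting `damp` below 1 slows the linear convergence but can rescue cases, such as nonlinear head-dependent boundary conditions, where undamped Picard iterates oscillate, which is the trade-off the study examines.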
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
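A minimal SGP sketch for a box-constrained least-squares problem: a diagonal scaling matrix, a projected scaled-gradient step, an Armijo backtracking search along the feasible direction, and a Barzilai-Borwein steplength update. The particular scaling and steplength rules are common choices assumed for illustration; the paper's convergence theory concerns the general scheme, not this specific instance.

```python
import numpy as np

def sgp(A, b, lb, ub, iters=500):
    """Scaled gradient projection for min 0.5*||Ax - b||^2, lb <= x <= ub:
    diagonal scaling, projected step, Armijo backtracking along the
    feasible direction, and a BB1 steplength update."""
    x = np.clip(np.zeros(A.shape[1]), lb, ub)
    g = A.T @ (A @ x - b)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2            # safe initial steplength
    scale = 1.0 / np.maximum(np.diag(A.T @ A), 1e-12)  # diagonal scaling matrix
    for _ in range(iters):
        y = np.clip(x - alpha * scale * g, lb, ub)     # scaled projected point
        s = y - x                                      # feasible descent direction
        if np.linalg.norm(s) < 1e-12:
            break
        f_x = 0.5 * np.linalg.norm(A @ x - b) ** 2
        t = 1.0
        while (0.5 * np.linalg.norm(A @ (x + t * s) - b) ** 2
               > f_x + 1e-4 * t * (g @ s) and t > 1e-8):
            t *= 0.5                                   # Armijo backtracking
        x = x + t * s
        g_new = A.T @ (A @ x - b)
        step, dg = t * s, g_new - g
        if step @ dg > 0:
            alpha = (step @ step) / (step @ dg)        # BB1 steplength
        g = g_new
    return x
```

On a convex quadratic the iterates converge to the constrained minimizer; for the nonconvex objectives covered by the paper's Kurdyka-Łojasiewicz analysis, this same scheme is only guaranteed to reach a stationary limit point.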
Simultaneous mixing and pumping using asymmetric microelectrodes
NASA Astrophysics Data System (ADS)
Kim, Byoung Jae; Yoon, Sang Youl; Sung, Hyung Jin; Smith, Charles G.
2007-10-01
This study proposes ideas for simultaneous mixing and pumping using asymmetric microelectrode arrays. The driving force of the mixing and pumping was based on electroosmotic flows induced by alternating current (ac) electric fields on asymmetric microelectrodes. The key idea was to bend/incline the microelectrodes like diagonal/herringbone shapes. Four patterns of the asymmetric electrode arrays were considered depending on the shape of electrode arrays. For the diagonal shape, repeated and staggered patterns of the electrode arrays were studied. For the herringbone shape, diverging and converging patterns were examined. These microelectrode patterns forced fluid flows in the lateral direction leading to mixing and in the channel direction leading to pumping. Three-dimensional numerical simulations were carried out using the linear theories of ac electro-osmosis. The performances of the mixing and pumping were assessed in terms of the mixing efficiency and the pumping flow rate. The results indicated that the helical flow motions induced by the electrode arrays play a significant role in the mixing enhancement. The pumping performance was influenced by the slip velocity at the center region of the channel compared to that near the side walls.
Seasonal Mixed Layer Heat Budget in the Southeast Tropical Atlantic
NASA Astrophysics Data System (ADS)
Scannell, H. A.; McPhaden, M. J.
2016-12-01
We analyze a mixed layer heat budget at 6°S, 8°E from a moored buoy of the Prediction and Research Moored Array in the Atlantic (PIRATA) to better understand the causes of seasonal mixed layer temperature variability in the southeast tropical Atlantic. This region is of interest because it is susceptible to warm biases in coupled global climate models and has historically been poorly sampled. Previous work suggests that thermodynamic changes in both latent heat loss and absorbed solar radiation dominate mixed layer properties away from the equator in the tropical Atlantic, while advection and entrainment are more important near the equator. Changes in mixed layer salinity can also influence temperature through the formation of barrier layers and density gradients. Freshwater flux from the Congo River, migration of the Intertropical Convergence Zone and advection of water masses are considered important contributors to mixed layer salinity variability in our study region. We analyze ocean temperature, salinity and meteorological data beginning in 2013 using mooring, Argo, and satellite platforms to study how seasonal temperature variability in the mixed layer is influenced by air-sea interactions and ocean dynamics.
Lagrangian particle method for compressible fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-02-09
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free-surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long-term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.
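The improved differential operators that the abstract credits to a weighted-least-squares polynomial fit can be illustrated in one dimension. The sketch below (a hypothetical Gaussian weight and kernel width, not the authors' multivariate particle formulation) recovers a derivative from irregularly spaced samples:

```python
import numpy as np

def wls_derivative(x0, xs, fs, degree=3):
    """Estimate f'(x0) from scattered samples by a weighted
    least-squares polynomial fit. A 1-D illustration of the idea;
    the method itself fits multivariate polynomials over
    neighborhoods of Lagrangian particles."""
    dx = np.asarray(xs, dtype=float) - x0
    h = np.abs(dx).max()                 # kernel width (a choice)
    w = np.exp(-(dx / h) ** 2)           # Gaussian weights (assumed)
    sw = np.sqrt(w)                      # applied to both sides of LS
    # Local Vandermonde: f(x0 + dx) ~ a0 + a1*dx + a2*dx^2 + ...
    A = np.vander(dx, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None],
                                 np.asarray(fs, dtype=float) * sw,
                                 rcond=None)
    return coeffs[1]                     # a1 = f'(x0)

# Irregularly spaced "particles" around x0 = 0
xs = np.array([-0.31, -0.18, -0.05, 0.04, 0.12, 0.27, 0.4])
fs = np.sin(xs)
dfdx = wls_derivative(0.0, xs, fs)       # should be close to cos(0) = 1
```

Unlike SPH kernel gradients, the fitted polynomial gives derivative approximations whose order of convergence is set directly by the polynomial degree, which is the property the abstract emphasizes.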
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
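Generic Anderson acceleration, contrasted with Picard iteration in the abstract above, can be written in a few lines of dense linear algebra. This is a minimal sketch of the standard windowed method (Type-II with window m), not the coupled neutronics/thermal-hydraulics solver from the paper:

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=200):
    """Windowed Anderson acceleration for the fixed point x = g(x).
    The first step is a plain Picard update; afterwards, the next
    iterate is a residual-minimizing combination of stored iterates."""
    X = [np.atleast_1d(np.asarray(x0, dtype=float))]
    G = [np.atleast_1d(g(X[0]))]
    for _ in range(maxit):
        f = G[-1] - X[-1]                      # current residual
        if np.linalg.norm(f) < tol:
            break
        if len(X) > 1:
            F = np.column_stack([Gi - Xi for Gi, Xi in zip(G, X)])
            dF = np.diff(F, axis=1)            # residual differences
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x_new = G[-1] - np.diff(np.column_stack(G), axis=1) @ gamma
        else:
            x_new = G[-1]                      # plain Picard step
        X.append(x_new)
        G.append(np.atleast_1d(g(x_new)))
        if len(X) > m + 1:                     # keep a window of m + 1
            X.pop(0); G.pop(0)
    return X[-1]

# x = cos(x): plain Picard converges slowly; Anderson is much faster
root = anderson(lambda x: np.cos(x), 1.0)
```

On the scalar test problem x = cos(x), plain Picard needs dozens of iterations for 1e-10 accuracy, while the accelerated iteration reaches it in a handful, illustrating the speedup the paper investigates on coupled physics.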
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
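The SBFO update described above combines three ingredients: gradient descent on the local cost, attraction toward a temporary central point, and a random jump normal to the line joining each agent to that point. The toy step below is a hypothetical 2-D caricature of those ingredients only, not the authors' algorithm; all parameter values and the decaying-jump schedule are assumptions:

```python
import numpy as np

def sbfo_step(agents, grad, center_w=0.5, jump=0.1, lr=0.05, rng=None):
    """One illustrative update in the spirit of SBFO (toy sketch):
    each agent descends its local gradient, is pulled toward the
    swarm's temporary central point, and takes a random jump normal
    to the line joining it to that point (the vortex-producing term)."""
    rng = np.random.default_rng() if rng is None else rng
    center = agents.mean(axis=0)            # temporary central point
    new = np.empty_like(agents)
    for i, x in enumerate(agents):
        d = center - x
        n = np.array([-d[1], d[0]])         # direction normal to d (2-D)
        nn = np.linalg.norm(n)
        if nn > 0:
            n = n / nn
        new[i] = x - lr * grad(x) + center_w * lr * d + jump * rng.normal() * n
    return new

# Minimize the sphere function f(x) = ||x||^2 with a few 2-D agents,
# shrinking the jump over time to allow convergence
grad = lambda x: 2.0 * x
rng = np.random.default_rng(0)
agents = rng.normal(size=(6, 2)) * 3.0
for t in range(400):
    agents = sbfo_step(agents, grad, jump=0.1 * 0.99 ** t, rng=rng)
```

The decaying jump mirrors the abstract's point that the stochastic term combats premature convergence while the gradient term supplies deterministic convergence.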
Construction, classification and parametrization of complex Hadamard matrices
NASA Astrophysics Data System (ADS)
Szöllősi, Ferenc
To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work.
They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
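Rayleigh quotient iteration, the eigenvalue solver adopted above, is easiest to see on a small symmetric matrix: shift by the Rayleigh quotient, solve the shifted system, renormalize. The sketch below is the generic dense method (Denovo's transport-specific, preconditioned variant is far more involved); the test matrix is an assumption for illustration:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, maxit=50):
    """Rayleigh quotient iteration for a symmetric matrix A.
    Converges cubically near an eigenpair, but each step requires
    solving a shifted (and increasingly ill-conditioned) system --
    the trade-off the abstract addresses with preconditioning."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    mu = x @ A @ x                       # Rayleigh quotient
    for _ in range(maxit):
        try:
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                        # shift landed on an eigenvalue
        x = y / np.linalg.norm(y)
        mu_new = x @ A @ x
        if abs(mu_new - mu) < tol:
            mu = mu_new
            break
        mu = mu_new
    return mu, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # eigenvalues (5 ± sqrt(5)) / 2
# Starting near the dominant eigenvector, RQI homes in on the
# dominant eigenvalue in just a few iterations
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 1.0]))
```

Power iteration on the same matrix contracts only by the dominance ratio per step, which is why RQI needs far fewer iterations for problems with high dominance ratios.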
Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows
NASA Astrophysics Data System (ADS)
Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2017-11-01
In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). By simulating two problems, particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT), we find that convergence under grid refinement depends on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results exhibit dependence on the techniques used to extract statistics for the particle phase. For HIT, converging both EE methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges when trying to extract converged, parameter-independent statistics due to the presence of shocks in the particle phase. This work was supported by the National Science Foundation and the National Energy Technology Laboratory.
Convergence of the Transition Probability Matrix in CLV-Markov Models
NASA Astrophysics Data System (ADS)
Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.
2018-04-01
A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its long-run behavior, which is derived from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n tends to infinity. Mathematically, establishing this convergence means finding the limit of the transition matrix raised to the power n as n tends to infinity. This limit is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to establish convergence of a transition probability matrix is the limiting-distribution approach. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept from linear algebra: diagonalizing the matrix. This method has a higher level of complexity because it requires carrying out the diagonalization, but it has the advantage of yielding a closed form for the nth power of the transition probability matrix, which makes it possible to inspect the transition matrix before it reaches stationarity. Example cases are taken from a customer lifetime value (CLV) model based on an MCM, called the CLV-Markov model. Several transition probability matrices from this model are examined to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained by the commonly used limiting-distribution method.
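The diagonalization route described above, obtaining a closed form for the nth power of the transition matrix and reading off its stationary limit, can be checked numerically. The 2-state matrix below is illustrative only, not taken from the paper's CLV-Markov models:

```python
import numpy as np

# Illustrative 2-state transition probability matrix (hypothetical
# numbers; rows sum to 1)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Diagonalize P = V diag(w) V^{-1}; then P^n = V diag(w^n) V^{-1},
# a closed form valid for every n -- the advantage described above.
w, V = np.linalg.eig(P)
V_inv = np.linalg.inv(V)

def P_power(n):
    # (V * w**n) multiplies each column of V by w_i**n,
    # i.e. V @ diag(w**n)
    return (V * w ** n) @ V_inv

# One eigenvalue equals 1; the others lie inside the unit circle,
# so as n grows P^n converges to a matrix whose identical rows are
# the stationary distribution (here 4/7, 3/7).
P_inf = P_power(200).real
```

The closed form also lets one inspect P^n at finite n (pre-stationary behavior), which the limiting-distribution method does not directly provide.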
NASA Technical Reports Server (NTRS)
Zeleznik, Frank J.; Gordon, Sanford
1960-01-01
The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is identical computationally to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; and, further, all three methods are guaranteed to converge and will ultimately converge quadratically. It is concluded that no one of the three methods offers any significant computational advantages over the other two.
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, the approach can be seen as a generalization of the simplified models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate.
An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media, but can also be employed to describe many other physical systems governed by a similar set of equations, including e.g. multi-component materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods.
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
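The FISTA baseline that this work accelerates can be stated compactly for a generic sparsity-regularized least-squares problem. The sketch below is the textbook algorithm (a proximal gradient step plus Nesterov-style momentum), not the OS-SART-accelerated CBCT variants proposed in the paper; the toy problem and parameter values are assumptions:

```python
import numpy as np

def fista(A, b, lam, iters=300):
    """Plain FISTA for the LASSO problem
        min_x 0.5*||A x - b||^2 + lam*||x||_1
    using the soft-thresholding operator as the proximal map."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz const. of gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)  # proximal gradient step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Toy sparse-recovery problem (hypothetical data, not CBCT)
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
```

In the paper's variants, the plain gradient step above is replaced by an OS-SART subproblem with a preconditioning matrix, which is where the order-of-magnitude iteration savings are reported.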
Quantification provides a conceptual basis for convergent evolution.
Speed, Michael P; Arbuckle, Kevin
2017-05-01
While much of evolutionary biology attempts to explain the processes of diversification, there is an important place for the study of phenotypic similarity across life forms. When similar phenotypes evolve independently in different lineages this is referred to as convergent evolution. Although long recognised, evolutionary convergence is receiving a resurgence of interest. This is in part because new genomic data sets allow detailed and tractable analysis of the genetic underpinnings of convergent phenotypes, and in part because of renewed recognition that convergence may reflect limitations in the diversification of life. In this review we propose that although convergent evolution itself does not require a new evolutionary framework, none the less there is room to generate a more systematic approach which will enable evaluation of the importance of convergent phenotypes in limiting the diversity of life's forms. We therefore propose that quantification of the frequency and strength of convergence, rather than simply identifying cases of convergence, should be considered central to its systematic comprehension. We provide a non-technical review of existing methods that could be used to measure evolutionary convergence, bringing together a wide range of methods. We then argue that quantification also requires clear specification of the level at which the phenotype is being considered, and argue that the most constrained examples of convergence show similarity both in function and in several layers of underlying form. Finally, we argue that the most important and impressive examples of convergence are those that pertain, in form and function, across a wide diversity of selective contexts as these persist in the likely presence of different selection pressures within the environment. © 2016 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.
NASA Astrophysics Data System (ADS)
Darcie, Thomas E.; Doverspike, Robert; Zirngibl, Martin; Korotky, Steven K.
2005-08-01
Call for Papers: Convergence The Journal of Optical Networking (JON) invites submissions to a special issue on Convergence. Convergence has become a popular theme in telecommunications, one that has broad implications across all segments of the industry. Continual evolution of technology and applications continues to erase lines between traditionally separate lines of business, with dramatic consequences for vendors, service providers, and consumers. Spectacular advances in all layers of optical networking-leading to abundant, dynamic, cost-effective, and reliable wide-area and local-area connections-have been essential drivers of this evolution. As services and networks continue to evolve towards some notion of convergence, the continued role of optical networks must be explored. One vision of convergence renders all information in a common packet (especially IP) format. This vision is driven by the proliferation of data services. For example, time-division multiplexed (TDM) voice becomes VoIP. Analog cable-television signals become MPEG bits streamed to digital set-top boxes. T1 or OC-N private lines migrate to Ethernet virtual private networks (VPNs). All these packets coexist peacefully within a single packet-routing methodology built on an optical transport layer that combines the flexibility and cost of data networks with telecom-grade reliability. While this vision is appealing in its simplicity and shared widely, specifics of implementation raise many challenges and differences of opinion. For example, many seek to expand the role of Ethernet in these transport networks, while massive efforts are underway to make traditional TDM networks more data friendly within an evolved but backward-compatible SDH/SONET (synchronous digital hierarchy and synchronous optical network) multiplexing hierarchy. From this common underlying theme follow many specific instantiations. 
Examples include the convergence at the physical, logical, and operational levels of voice and data, video and data, private-line and virtual private-line, fixed and mobile, and local and long-haul services. These trends have many consequences for consumers, vendors, and carriers. Faced with large volumes of low-margin data traffic mixed with traditional voice services, the need for capital conservation and operational efficiency drives carriers away from today's separate overlay networks for each service and towards "converged" platforms. For example, cable operators require transport of multiple services over both hybrid fiber coax (HFC) and DWDM transport technologies. Local carriers seek an economical architecture to deliver integrated services on optically enabled broadband-access networks. Services over wireless-access networks must coexist with those from wired networks. In each case, convergence of networks and services inspires an important set of questions and challenges, driven by the need for low cost, operational efficiency, service performance requirements, and optical transport technology options. This Feature Issue explores the various interpretations and implications of network convergence pertinent to optical networking. How does convergence affect the evolution of optical transport-layer and control approaches? Are the implied directions consistent with research vision for optical networks? Substantial challenges remain. Papers are solicited across the broad spectrum of interests. These include, but are not limited to: Architecture, design and performance of optical wide-area-network (WAN), metro, and access networks Integration strategies for multiservice transport platforms Access methods that bridge traditional and emerging services Network signaling and control methodologies All-optical packet routing and switching techniques
NASA Astrophysics Data System (ADS)
Darcie, Thomas E.; Doverspike, Robert; Zirngibl, Martin; Korotky, Steven K.
2005-06-01
Call for Papers: Convergence The Journal of Optical Networking (JON) invites submissions to a special issue on Convergence. Convergence has become a popular theme in telecommunications, one that has broad implications across all segments of the industry. Continual evolution of technology and applications continues to erase lines between traditionally separate lines of business, with dramatic consequences for vendors, service providers, and consumers. Spectacular advances in all layers of optical networking-leading to abundant, dynamic, cost-effective, and reliable wide-area and local-area connections-have been essential drivers of this evolution. As services and networks continue to evolve towards some notion of convergence, the continued role of optical networks must be explored. One vision of convergence renders all information in a common packet (especially IP) format. This vision is driven by the proliferation of data services. For example, time-division multiplexed (TDM) voice becomes VoIP. Analog cable-television signals become MPEG bits streamed to digital set-top boxes. T1 or OC-N private lines migrate to Ethernet virtual private networks (VPNs). All these packets coexist peacefully within a single packet-routing methodology built on an optical transport layer that combines the flexibility and cost of data networks with telecom-grade reliability. While this vision is appealing in its simplicity and shared widely, specifics of implementation raise many challenges and differences of opinion. For example, many seek to expand the role of Ethernet in these transport networks, while massive efforts are underway to make traditional TDM networks more data friendly within an evolved but backward-compatible SDH/SONET (synchronous digital hierarchy and synchronous optical network) multiplexing hierarchy. From this common underlying theme follow many specific instantiations. 
Examples include the convergence at the physical, logical, and operational levels of voice and data, video and data, private-line and virtual private-line, fixed and mobile, and local and long-haul services. These trends have many consequences for consumers, vendors, and carriers. Faced with large volumes of low-margin data traffic mixed with traditional voice services, the need for capital conservation and operational efficiency drives carriers away from today's separate overlay networks for each service and towards "converged" platforms. For example, cable operators require transport of multiple services over both hybrid fiber coax (HFC) and DWDM transport technologies. Local carriers seek an economical architecture to deliver integrated services on optically enabled broadband-access networks. Services over wireless-access networks must coexist with those from wired networks. In each case, convergence of networks and services inspires an important set of questions and challenges, driven by the need for low cost, operational efficiency, service performance requirements, and optical transport technology options. This Feature Issue explores the various interpretations and implications of network convergence pertinent to optical networking. How does convergence affect the evolution of optical transport-layer and control approaches? Are the implied directions consistent with research vision for optical networks? Substantial challenges remain. Papers are solicited across the broad spectrum of interests. These include, but are not limited to: Architecture, design and performance of optical wide-area-network (WAN), metro, and access networks Integration strategies for multiservice transport platforms Access methods that bridge traditional and emerging services Network signaling and control methodologies All-optical packet routing and switching techniques
NASA Astrophysics Data System (ADS)
Darcie, Thomas E.; Doverspike, Robert; Zirngibl, Martin; Korotky, Steven K.
2005-05-01
Call for Papers: Convergence The Journal of Optical Networking (JON) invites submissions to a special issue on Convergence. Convergence has become a popular theme in telecommunications, one that has broad implications across all segments of the industry. Continual evolution of technology and applications continues to erase lines between traditionally separate lines of business, with dramatic consequences for vendors, service providers, and consumers. Spectacular advances in all layers of optical networking-leading to abundant, dynamic, cost-effective, and reliable wide-area and local-area connections-have been essential drivers of this evolution. As services and networks continue to evolve towards some notion of convergence, the continued role of optical networks must be explored. One vision of convergence renders all information in a common packet (especially IP) format. This vision is driven by the proliferation of data services. For example, time-division multiplexed (TDM) voice becomes VoIP. Analog cable-television signals become MPEG bits streamed to digital set-top boxes. T1 or OC-N private lines migrate to Ethernet virtual private networks (VPNs). All these packets coexist peacefully within a single packet-routing methodology built on an optical transport layer that combines the flexibility and cost of data networks with telecom-grade reliability. While this vision is appealing in its simplicity and shared widely, specifics of implementation raise many challenges and differences of opinion. For example, many seek to expand the role of Ethernet in these transport networks, while massive efforts are underway to make traditional TDM networks more data friendly within an evolved but backward-compatible SDH/SONET (synchronous digital hierarchy and synchronous optical network) multiplexing hierarchy. From this common underlying theme follow many specific instantiations. 
Examples include the convergence at the physical, logical, and operational levels of voice and data, video and data, private-line and virtual private-line, fixed and mobile, and local and long-haul services. These trends have many consequences for consumers, vendors, and carriers. Faced with large volumes of low-margin data traffic mixed with traditional voice services, the need for capital conservation and operational efficiency drives carriers away from today's separate overlay networks for each service and towards "converged" platforms. For example, cable operators require transport of multiple services over both hybrid fiber coax (HFC) and DWDM transport technologies. Local carriers seek an economical architecture to deliver integrated services on optically enabled broadband-access networks. Services over wireless-access networks must coexist with those from wired networks. In each case, convergence of networks and services inspires an important set of questions and challenges, driven by the need for low cost, operational efficiency, service performance requirements, and optical transport technology options. This Feature Issue explores the various interpretations and implications of network convergence pertinent to optical networking. How does convergence affect the evolution of optical transport-layer and control approaches? Are the implied directions consistent with research vision for optical networks? Substantial challenges remain. Papers are solicited across the broad spectrum of interests. These include, but are not limited to: Architecture, design and performance of optical wide-area-network (WAN), metro, and access networks Integration strategies for multiservice transport platforms Access methods that bridge traditional and emerging services Network signaling and control methodologies All-optical packet routing and switching techniques
NASA Astrophysics Data System (ADS)
Darcie, Thomas E.; Doverspike, Robert; Zirngibl, Martin; Korotky, Steven K.
2005-04-01
Saddeek, Ali Mohamed
2017-01-01
Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. The resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. Willingness-to-pay (WTP) estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data, especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
NASA Astrophysics Data System (ADS)
Jang, T. S.
2018-03-01
A dispersion-relation preserving (DRP) method, a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion relation; however, it retains some flaws, e.g., a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these flaws, a new DRP method is proposed in this paper, aimed at improving convergence performance. The improved method is proved to have convergence properties and a dispersion-relation preserving nature for small waves; unique existence of the solutions is also proved. In addition, a numerical experiment confirms that the method is well suited to observing nonlinear wave phenomena such as moving solitary waves and their binary collision at different wave amplitudes. In particular, it presents a ROC (much) wider than that of the previous method of Jang (2017), and it enables the numerical simulation of high (large-amplitude) nonlinear dispersive waves. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two solitary waves with large amplitudes, which the previous method failed to simulate. Conclusively, better convergence results are achieved compared to Jang (2017); they represent a major improvement in practice over the previous method.
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well-known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very poor approximations of them. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN IV, and comparisons in time and accuracy are given.
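The core idea behind a G.C.D.-based approach can be sketched briefly: dividing a polynomial p by gcd(p, p') strips out repeated roots, leaving a square-free polynomial on which Newton's method regains its usual fast convergence. The sketch below illustrates that idea only; it is not the report's FORTRAN implementation, and the function names and tolerances are our own.

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclidean algorithm on polynomial coefficient arrays (highest degree first)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    while True:
        _, r = np.polydiv(a, b)
        big = np.abs(r) > tol
        if not big.any():            # remainder is (numerically) zero: b is the gcd
            return b / b[0]          # return a monic gcd
        r = r[np.argmax(big):]       # strip leading near-zero coefficients
        a, b = b, r

def square_free(p, tol=1e-8):
    """Divide out gcd(p, p') so each distinct root appears exactly once."""
    g = poly_gcd(p, np.polyder(p), tol)
    q, _ = np.polydiv(p, g)
    return q

def newton(p, x0, tol=1e-12, max_iter=60):
    """Plain Newton iteration on the polynomial p, starting from x0."""
    dp = np.polyder(p)
    x = float(x0)
    for _ in range(max_iter):
        step = np.polyval(p, x) / np.polyval(dp, x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For p(x) = (x-1)^2 (x-2), `square_free` returns (a multiple of) x^2 - 3x + 2, on which Newton converges quadratically to either root.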
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Cotton-Mouton effect and shielding polarizabilities of ethylene: An MCSCF study
NASA Astrophysics Data System (ADS)
Coriani, Sonia; Rizzo, Antonio; Ruud, Kenneth; Helgaker, Trygve
1997-03-01
The static hypermagnetizabilities and nuclear shielding polarizabilities of the carbon and hydrogen atoms of ethylene have been computed using multiconfigurational linear-response theory and a finite-field method, in a mixed analytical-numerical approach. Extended sets of magnetic-field-dependent basis functions have been employed in large MCSCF calculations, involving active spaces giving rise to a few million configurations in the finite-field perturbed symmetry. The convergence of the observables with respect to the extension of the basis set, as well as the effect of electron correlation, has been investigated. Whereas for the shielding polarizabilities we can compare with other published SCF results, the ab initio estimates for the static hypermagnetizabilities, and the observable to which they are related, the Cotton-Mouton constant, are presented for the first time.
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
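For context, the classical parareal correction this work builds on is U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n]), where G is a cheap coarse propagator and F an accurate fine one. The sketch below applies that standard (non-fractional) scheme to a scalar ODE with explicit Euler in both roles, differing only in step count; it is an illustration, not the authors' mixed coarse-grid correction, and all names are our own.

```python
import numpy as np

def parareal(f, y0, t0, t1, n_slices, coarse_steps, fine_steps, iters):
    """Classical parareal for y' = f(t, y): coarse and fine propagators are
    both explicit Euler, with the fine solver taking many more substeps."""
    def euler(y, ta, tb, steps):
        h = (tb - ta) / steps
        t = ta
        for _ in range(steps):
            y = y + h * f(t, y)
            t += h
        return y

    T = np.linspace(t0, t1, n_slices + 1)
    U = [y0]                                   # initial serial coarse sweep
    for n in range(n_slices):
        U.append(euler(U[-1], T[n], T[n + 1], coarse_steps))
    G_old = U[1:]                              # G(U_k[n]) from the previous iterate

    for _ in range(iters):
        # Fine propagations are independent: this is the parallel part.
        F = [euler(U[n], T[n], T[n + 1], fine_steps) for n in range(n_slices)]
        U_new, G_new = [y0], []
        for n in range(n_slices):
            g = euler(U_new[-1], T[n], T[n + 1], coarse_steps)
            G_new.append(g)
            U_new.append(g + F[n] - G_old[n])  # parareal correction
        U, G_old = U_new, G_new
    return U
```

For linear decay y' = -y the iteration converges rapidly to the serial fine solution on the time-slice grid.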
Concord, Convergence and Accommodation in Bilingual Children
ERIC Educational Resources Information Center
Radford, Andrew; Kupisch, Tanja; Koppe, Regina; Azzaro, Gabriele
2007-01-01
This paper examines the syntax of "GENDER CONCORD" in mixed utterances where bilingual children switch between a modifier in one language and a noun in another. Particular attention is paid to how children deal with potential gender mismatches between modifier and noun, i.e., if one of the languages has grammatical gender but the other does not,…
ERIC Educational Resources Information Center
Brown, Barbara B.; Werner, Carol M.; Amburgey, Jonathan W.; Szalay, Caitlin
2007-01-01
Guided walks near a light rail stop in downtown Salt Lake City, Utah, were examined using a 2 (gender) x 3 (route walkability: low-, mixed-, or high-walkability features) design. Trained raters confirmed that more walkable segments had more traffic, environmental, and social safety; pleasing aesthetics; natural features; pedestrian amenities; and…
ERIC Educational Resources Information Center
Carrola, Paul A.; Yu, Kumlan; Sass, Daniel A.; Lee, Sang Min
2012-01-01
This study assessed scores from the Counselor Burnout Inventory for factorial validity, convergent and discriminant validity, internal consistency reliability, and measurement invariance across U.S. and Korean counselors. Although evidence existed for factorial validity across both groups, mixed results emerged for the other forms of validity and…
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
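One way to see these patterns in data, in the spirit described above, is to fit the model e_{k+1} ~ C * e_k^p to successive iteration errors: taking logs of two consecutive error ratios gives p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}). A hedged sketch of that estimate (our own naming, not the article's code), demonstrated on Newton's method for sqrt(2):

```python
import math

def convergence_order(errors):
    """Estimate the order p from successive iteration errors,
    assuming the model e_{k+1} ~ C * e_k**p."""
    est = []
    for k in range(1, len(errors) - 1):
        num = math.log(errors[k + 1] / errors[k])
        den = math.log(errors[k] / errors[k - 1])
        est.append(num / den)
    return est

# Newton's method on f(x) = x^2 - 2: errors should shrink quadratically.
root, x = math.sqrt(2.0), 1.0
errors = []
for _ in range(5):
    errors.append(abs(x - root))
    x = x - (x * x - 2.0) / (2.0 * x)
orders = convergence_order(errors)
```

The later estimates settle near p = 2, the signature of quadratic convergence; a linearly convergent fixed-point iteration would instead settle near p = 1.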
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Harada, Kyouji; Morimoto, Munenori; Sakakihara, Michio
2004-03-01
Several preconditioned iterative methods reported in the literature have been used for improving the convergence rate of the Gauss-Seidel method. In this article, on the basis of nonnegative matrix theory, comparisons between some splittings for such preconditioned matrices are derived. Simple numerical examples are also given.
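For reference, the unpreconditioned Gauss-Seidel iteration that such preconditioners aim to accelerate can be sketched as below (a plain implementation under the usual diagonal-dominance assumption; the preconditioned variants in this literature typically first multiply the system by a matrix of the form I + S built from part of A).

```python
import numpy as np

def gauss_seidel(A, b, x0, sweeps):
    """Plain Gauss-Seidel: each x[i] is updated in place, so later rows
    within a sweep already see the newest values."""
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = s / A[i, i]
    return x
```

On a small diagonally dominant system the iterate converges to the direct solution within a modest number of sweeps.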
Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W
2015-01-01
CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). BL-R and Wijnen methods show the highest convergent validity evidence among other methods on the defined criteria. Angoff and Mean-1.5SD demonstrated least convergent validity evidence. The three cluster variants showed substantial convergent validity with borderline methods. Although there was a high level of convergent validity of Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. 
The BL-R method is found to show the highest convergent validity evidence with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flows are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate, and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
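The Von Neumann analysis mentioned above examines a scheme's amplification factor mode by mode. As a minimal self-contained example (not the paper's Navier-Stokes analysis), first-order upwind differencing for linear advection has g(theta) = 1 - nu*(1 - exp(-i*theta)), which satisfies |g| <= 1 exactly when the CFL number nu lies in [0, 1]:

```python
import numpy as np

def max_amplification(nu, n=4001):
    """Largest |g(theta)| over Fourier modes for first-order upwind advection."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    g = 1.0 - nu * (1.0 - np.exp(-1j * theta))
    return float(np.max(np.abs(g)))
```

Sampling the factor confirms stability at nu = 0.5 and nu = 1.0 and instability at nu = 1.2, matching the analytic bound |g|^2 = 1 - 2*nu*(1 - nu)*(1 - cos(theta)).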
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2010-01-01
Codes for predicting supersonic jet mixing and broadband shock-associated noise were assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. Two types of codes were used to make predictions. Fast running codes containing empirical models were used to compute both the mixing noise component and the shock-associated noise component of the jet noise spectrum. One Reynolds-averaged, Navier-Stokes-based code was used to compute only the shock-associated noise. To enable the comparisons of the predicted component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise components. Comparisons were made for 1/3-octave spectra and some power spectral densities using data from jets operating at 24 conditions covering essentially 6 fully expanded Mach numbers with 4 total temperature ratios.
NASA Astrophysics Data System (ADS)
Murphy, T. J.; Kyrala, G. A.; Krasheninnikova, N. S.; Bradley, P. A.; Cobble, J. A.; Tregillis, I. L.; Obrey, K. A. D.; Baumgaertel, J. A.; Hsu, S. C.; Shah, R. C.; Hakel, P.; Kline, J. L.; Schmitt, M. J.; Kanzleiter, R. J.; Batha, S. H.; Wallace, R. J.; Bhandarkar, S.; Fitzsimmons, P.; Hoppe, M.; Nikroo, A.; McKenty, P.
2016-03-01
Capsules driven with polar drive [1, 2] on the National Ignition Facility [3] are being used [4] to study mix in convergent geometry. In preparation for experiments that will utilize deuterated plastic shells with a pure tritium fill, hydrogen-filled capsules with copper-doped deuterated layers have been imploded on NIF to provide spectroscopic and nuclear measurements of capsule performance. Experiments have shown that the mix region, when composed of shell material doped with about 1% copper (by atom), reaches temperatures of about 2 keV, while undoped mixed regions reach about 3 keV. Based on the yield from these implosions, we estimate the thickness of CD that mixed into the gas as between about 0.25 and 0.43 μm of the inner capsule surface, corresponding to about 5 to 9 μg of material. Using 5 atm of tritium as the fill gas should result in over 10¹³ DT neutrons being produced, which is sufficient for neutron imaging [5].
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
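To make the two-grid cycle concrete, here is a small sketch for the 1-D model Laplace problem with a weighted-Jacobi smoother and an exact coarse solve. It illustrates the cycle being analyzed, not the authors' code; the smoothing counts and weights are conventional choices, not taken from the paper.

```python
import numpy as np

def apply_A(u, h):
    # Action of the 3-point discretization of -u'' with zero Dirichlet BCs.
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / (h * h)

def two_grid(u, f, h, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    def smooth(v, sweeps):
        for _ in range(sweeps):
            v = v + omega * (h * h / 2.0) * (f - apply_A(v, h))  # weighted Jacobi
        return v

    u = smooth(u, nu)                                        # pre-smoothing
    r = f - apply_A(u, h)
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # full-weighting restriction
    nc, H = len(rc), 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (H * H)
    ec = np.linalg.solve(Ac, rc)                             # exact coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                             # coarse nodes = odd fine nodes
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])                     # linear interpolation
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return smooth(u + e, nu)                                 # correction + post-smoothing
```

Starting from a zero guess for -u'' = pi^2 sin(pi x), a few cycles reduce the residual by several orders of magnitude, the grid-independent behavior the Fourier analysis quantifies.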
Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.
2017-01-01
The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of being unable to determine the most likely conformations and orientations of the adsorbed protein on the surface or the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states were calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results compared to using standard CHARMM force field parameterization to represent molecular behavior at the interface. PMID:28514864
A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
2017-10-04
We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shapes of polygons and, at the same time, is parameter independent with respect to its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.
Community Game Day: Using an End-of-Life Conversation Game to Encourage Advance Care Planning.
Van Scoy, Lauren J; Reading, Jean M; Hopkins, Margaret; Smith, Brandi; Dillon, Judy; Green, Michael J; Levi, Benjamin H
2017-11-01
Advance care planning (ACP) is an important process that involves discussing and documenting one's values and preferences for medical care, particularly end-of-life treatments. This convergent, mixed-methods study assessed whether an end-of-life conversation card game is an acceptable and effective means for performing ACP for patients with chronic illness and/or their caregivers when deployed in a community setting. Twenty-two games (n = 93 participants) were held in community settings surrounding Hershey, PA in 2016. Participants were recruited using random sampling from patient databases and also convenience sampling (i.e., flyers). Quantitative questionnaires and qualitative focus group interviews were administered to assess the game experience and subsequent performance of ACP behaviors. Both quantitative and qualitative data found that Community Game Day was a well-received, positive experience for participants and 75% of participants performed ACP within three months post-intervention. These findings suggest that using a conversation game during community outreach is a useful approach for engaging patients and caregivers in ACP. The convergence of quantitative and qualitative data strongly supports the continued investigation of the game in randomized controlled trials. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-01-01
Objectives and design: Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Setting: Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. Methods and outcomes: 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SDs of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman's correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Results: Quantitatively, fidelity scores were high for all assessment methods, with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists' individual scores (69.8%-100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists' knowledge (Spearman's association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (e.g., individual needs) and programme-level factors (e.g., resources) as additional elements that influenced fidelity.
Conclusion: The intervention was delivered with high fidelity. This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. Trial registration number: ISRCTN49875385; Pre-results. PMID:28780544
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance, ⟨ρ²⟩. We show that the normalized density variance, ⟨ρ²⟩, reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog c̃″², and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences).
We have derived analytic results relating several other second and third order moments, and see coupling between odd and even order moments, demonstrating a natural and inherent skewness of the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in the progress variable formulation).
Beck, Cheryl Tatano; LoGiudice, Jenna; Gable, Robert K
2015-01-01
Secondary traumatic stress (STS) is an occupational hazard for clinicians, who can experience symptoms of posttraumatic stress disorder (PTSD) from exposure to their traumatized patients. The purpose of this mixed-methods study was to determine the prevalence and severity of STS in certified nurse-midwives (CNMs) and to explore their experiences attending traumatic births. A convergent, parallel mixed-methods design was used. The American Midwifery Certification Board sent out e-mails to all their CNM members with a link to the SurveyMonkey study. The STS Scale was used to collect data for the quantitative strand. For the qualitative strand, participants were asked to describe their experiences of attending one or more traumatic births. IBM SPSS (Version 21.0, Armonk, NY) was used to analyze the quantitative data, and Krippendorff content analysis was the method used to analyze the qualitative data. The sample consisted of 473 CNMs who completed the quantitative portion and 246 (52%) who completed the qualitative portion. In this sample, 29% of the CNMs reported high to severe STS, and 36% screened positive for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition diagnostic criteria for PTSD due to attending traumatic births. The top 3 types of traumatic births described by the CNMs were fetal demise/neonatal death, shoulder dystocia, and infant resuscitation. Content analysis revealed 6 themes: 1) protecting my patients: agonizing sense of powerlessness and helplessness; 2) wreaking havoc: trio of posttraumatic stress symptoms; 3) circling the wagons: it takes a team to provide support … or not; 4) litigation: nowhere to go to unburden our souls; 5) shaken belief in the birth process: impacting midwifery practice; and 6) moving on: where do I go from here? The midwifery profession should acknowledge STS as a professional risk. © 2015 by the American College of Nurse-Midwives.
Global convergence of inexact Newton methods for transonic flow
NASA Technical Reports Server (NTRS)
Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.
1990-01-01
In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.
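The local damping strategy mentioned in the abstract can be sketched in a few lines. The Python below is a minimal illustration, not the paper's full-potential/GMRES solver: the Newton step is halved until the residual norm actually decreases. The classic arctangent example (an assumption chosen here for illustration) shows why damping matters, since undamped Newton diverges from this starting point.

```python
import math

def damped_newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """1-D Newton iteration with residual-reduction damping (step halving).
    A minimal sketch of a local damping strategy; the paper's setting
    (full potential equation, inexact GMRES inner solves) is far larger."""
    x = x0
    for _ in range(max_iter):
        r = f(x)
        if abs(r) < tol:
            break
        dx = -r / fprime(x)   # an inexact (iterative) solve also fits here
        lam = 1.0
        # damp the step until the residual actually decreases
        while lam > 1e-10 and abs(f(x + lam * dx)) >= abs(r):
            lam *= 0.5
        x += lam * dx
    return x

# Plain Newton diverges for f(x) = atan(x) from x0 = 3; the damped
# iteration still converges to the root x = 0.
root = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 3.0)
```

Continuation from a coarse-grid solution plays a similar role: it supplies an initial guess close enough that little or no damping is needed.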
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
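The quantity LCIIS minimizes has a compact expression. The sketch below is illustrative only, assuming an orthonormal basis (real SCF codes include the overlap matrix and minimize over combinations of past iterates): it evaluates the Frobenius norm of the Fock-density commutator, which vanishes at self-consistency because a converged density matrix shares eigenvectors with the Fock matrix.

```python
import numpy as np

def commutator_error(F, D):
    """Frobenius norm of the Fock-density commutator [F, D] = FD - DF
    (orthonormal basis assumed). At SCF convergence F and D commute,
    so this error measure vanishes."""
    C = F @ D - D @ F
    return np.linalg.norm(C, 'fro')

# A density built from an eigenvector of F commutes with F:
F = np.array([[2.0, 1.0], [1.0, 3.0]])
w, V = np.linalg.eigh(F)
D = np.outer(V[:, 0], V[:, 0])          # occupy the lowest orbital
err_converged = commutator_error(F, D)              # ~0 (self-consistent)
err_bad = commutator_error(F, np.diag([1.0, 0.0]))  # nonzero for a non-eigen density
```

DIIS-style accelerators then form the next density from past iterates so as to drive this kind of error measure toward zero.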
Predictions and Verification of an Isotope Marine Boundary Layer Model
NASA Astrophysics Data System (ADS)
Feng, X.; Posmentier, E. S.; Sonder, L. J.; Fan, N.
2017-12-01
A one-dimensional (1D), steady state isotope marine boundary layer (IMBL) model is constructed. The model includes meteorologically important features absent in Craig and Gordon type models, namely height-dependent diffusion/mixing and convergence of subsiding external air. Kinetic isotopic fractionation results from this height-dependent diffusion, which starts as pure molecular diffusion at the air-water interface and increases linearly with height due to turbulent mixing. The convergence permits dry, isotopically depleted air subsiding adjacent to the model column to mix into ambient air. In δD-δ18O space, the model results fill a quadrilateral, of which three sides represent 1) vapor in equilibrium with various sea surface temperatures (SSTs) (high δ18O boundary of quadrilateral); 2) mixture of vapor in equilibrium with seawater and vapor in the subsiding air (lower boundary depleted in both D and 18O); and 3) vapor that has experienced the maximum possible kinetic fractionation (high δD upper boundary). The results can be plotted in d-excess vs. δ18O space, indicating that these processes all cause variations in d-excess of MBL vapor. In particular, due to relatively high d-excess in the descending air, mixing of this air into the MBL causes an increase in d-excess, even without kinetic isotope fractionation. The model is tested by comparison with seven datasets of marine vapor isotopic ratios, with excellent correspondence; >95% of observational data fall within the quadrilateral area predicted by the model. The distribution of observations also highlights the significant influence of vapor from the nearby converging descending air on isotopic variations in the MBL. At least three factors may explain the <5% of observations that fall slightly outside of the predicted region in both δD-δ18O and d-excess vs. δ18O space: 1) variations in seawater isotopic ratios, 2) variations in isotopic composition of subsiding air, and 3) influence of sea spray.
The model can be used for understanding the effects of boundary layer processes and meteorological conditions on isotopic composition of vapor within, and vapor fluxes through, the MBL, and how changes in moisture source regions affect the isotopic composition of precipitation. The model can be applied to modern as well as paleoclimate conditions.
Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions
1977-09-01
Gauss-Seidel point iteration method … factors affecting the rate of convergence of the point iteration method … can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the … The advantage of using the Gauss-Seidel point iteration method to …
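For context, the Gauss-Seidel point iteration named in the excerpt is easy to state. The sketch below is illustrative (a toy diagonally dominant system, not the report's discretized diffuser-flow equations): each unknown is updated in place using the newest available values of its neighbours, which is exactly what distinguishes Gauss-Seidel from Jacobi iteration.

```python
def gauss_seidel(A, b, x0, sweeps=200):
    """Gauss-Seidel point iteration: sweep through the unknowns,
    updating each one in place from the latest values of the others."""
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system, for which Gauss-Seidel converges:
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])   # converges to (1, 2, 3)
```

The convergence rate depends on the spectral radius of the iteration matrix, which is what the report's "factors affecting the rate of convergence" heading refers to.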
Children's activities and their meanings for parents: a mixed-methods study in six Western cultures.
Harkness, Sara; Zylicz, Piotr Olaf; Super, Charles M; Welles-Nyström, Barbara; Bermúdez, Moisés Ríos; Bonichini, Sabrina; Moscardino, Ughetta; Mavridis, Caroline Johnston
2011-12-01
Theoretical perspectives and research in sociology, anthropology, sociolinguistics, and cultural psychology converge in recognizing the significance of children's time spent in various activities, especially in the family context. Knowing how children's time is deployed, however, only gives us a partial answer to how children acquire competence; the other part must take into account the culturally constructed meanings of activities, from the perspective of those who organize and direct children's daily lives. In this article, we report on a study of children's routine daily activities and on the meanings that parents attribute to them in six Western middle-class cultural communities located in Italy, The Netherlands, Poland, Spain, Sweden, and the United States (N = 183). Using week-long time diaries kept by parents, we first demonstrate similarities as well as significant differences in children's daily routines across the cultural samples. We then present brief vignettes, "a day in the life," of children from each sample. Parent interviews were coded for themes in the meanings attributed to various activities. Excerpts from parent interviews, focusing on four major activities (meals, family time, play, school- or developmentally related activities), are presented to illustrate how cultural meanings and themes are woven into parents' organization and understanding of their children's daily lives. The results of this mixed-method approach provide a more reliable and nuanced picture of children's and families' daily lives than could be derived from either method alone.
NASA Astrophysics Data System (ADS)
Darcie, Thomas E.; Doverspike, Robert; Zirngibl, Martin; Korotky, Steven K.
2005-09-01
Call for Papers: Convergence. The Journal of Optical Networking (JON) invites submissions to a special issue on Convergence. Convergence has become a popular theme in telecommunications, one that has broad implications across all segments of the industry. The continual evolution of technology and applications erases the lines between traditionally separate lines of business, with dramatic consequences for vendors, service providers, and consumers. Spectacular advances in all layers of optical networking, leading to abundant, dynamic, cost-effective, and reliable wide-area and local-area connections, have been essential drivers of this evolution. As services and networks continue to evolve towards some notion of convergence, the continued role of optical networks must be explored. One vision of convergence renders all information in a common packet (especially IP) format. This vision is driven by the proliferation of data services. For example, time-division multiplexed (TDM) voice becomes VoIP. Analog cable-television signals become MPEG bits streamed to digital set-top boxes. T1 or OC-N private lines migrate to Ethernet virtual private networks (VPNs). All these packets coexist peacefully within a single packet-routing methodology built on an optical transport layer that combines the flexibility and cost of data networks with telecom-grade reliability. While this vision is appealing in its simplicity and widely shared, specifics of implementation raise many challenges and differences of opinion. For example, many seek to expand the role of Ethernet in these transport networks, while massive efforts are underway to make traditional TDM networks more data friendly within an evolved but backward-compatible SDH/SONET (synchronous digital hierarchy and synchronous optical network) multiplexing hierarchy. From this common underlying theme follow many specific instantiations.
Examples include the convergence at the physical, logical, and operational levels of voice and data, video and data, private-line and virtual private-line, fixed and mobile, and local and long-haul services. These trends have many consequences for consumers, vendors, and carriers. Faced with large volumes of low-margin data traffic mixed with traditional voice services, carriers are driven by the need for capital conservation and operational efficiency away from today's separate overlay networks for each service and towards "converged" platforms. For example, cable operators require transport of multiple services over both hybrid fiber coax (HFC) and DWDM transport technologies. Local carriers seek an economical architecture to deliver integrated services on optically enabled broadband-access networks. Services over wireless-access networks must coexist with those from wired networks. In each case, convergence of networks and services inspires an important set of questions and challenges, driven by the need for low cost, operational efficiency, service performance requirements, and optical transport technology options. This Feature Issue explores the various interpretations and implications of network convergence pertinent to optical networking. How does convergence affect the evolution of optical transport-layer and control approaches? Are the implied directions consistent with the research vision for optical networks? Substantial challenges remain. Papers are solicited across the broad spectrum of interests.
These include, but are not limited to: architecture, design, and performance of optical wide-area-network (WAN), metro, and access networks; integration strategies for multiservice transport platforms; access methods that bridge traditional and emerging services; network signaling and control methodologies; and all-optical packet routing and switching techniques. To submit to this special issue, follow the normal procedure for submission to JON, indicating "Convergence feature" in the "Comments" field of the online submission form. For all other questions relating to this feature issue, please send an e-mail to jon@osa.org, subject line "Convergence." Additional information can be found on the JON website: http://www.osa-jon.org/submission/ Submission Deadline: 1 October 2005
Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.
Forti, Mauro; Nistri, Paolo; Quincampoix, Marc
2006-11-01
This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which make it possible to implement an exact penalty method. A new method is exploited to address convergence of trajectories, which is based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence it converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, are true independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.
Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation
NASA Astrophysics Data System (ADS)
Blumenthal, Benjamin J.; Zhan, Hongbin
2016-08-01
We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via the Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux, one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link the Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate of solution-A and solution-B is the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
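The early-time/late-time switch the authors exploit is the classic behavior of Poisson resummation, which can be illustrated with the Jacobi theta function (a simpler stand-in for the aquifer Green's function, chosen here only for illustration). The direct series and its resummed form converge quickly in opposite limits, with equal rates at t = 1, the analogue of the paper's "switch time":

```python
import math

def theta_direct(t, terms=50):
    """theta(t) = sum over n in Z of exp(-pi n^2 t); converges fast for large t."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

def theta_resummed(t, terms=50):
    """Poisson-resummed form: theta(t) = theta(1/t) / sqrt(t); fast for small t."""
    return theta_direct(1.0 / t, terms) / math.sqrt(t)

def theta(t, terms=50):
    """Use whichever series converges faster; both rates match at t = 1,
    the analogue of the 'switch time' in the drawdown solution."""
    return theta_direct(t, terms) if t >= 1.0 else theta_resummed(t, terms)
```

At t = 1e-4 the direct series needs hundreds of terms while the resummed form is converged after a single term, mirroring the inverse convergence rates of solution-A and solution-B.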
Alloying and Hardness of Eutectics with Nbss and Nb₅Si₃ in Nb-silicide Based Alloys.
Tsakiropoulos, Panos
2018-04-11
In Nb-silicide based alloys, eutectics can form that contain the Nbss and Nb₅Si₃ phases. The Nb₅Si₃ can be rich or poor in Ti, the Nb can be substituted with other transition and refractory metals, and the Si can be substituted with simple metal and metalloid elements. For the production of directionally solidified in situ composites of multi-element Nb-silicide based alloys, data about eutectics with Nbss and Nb₅Si₃ is essential. In this paper, the alloying behaviour of eutectics observed in Nb-silicide based alloys was studied using the parameters ΔHmix, ΔSmix, VEC (valence electron concentration), δ (related to atomic size), Δχ (related to electronegativity), and Ω (= TmΔSmix/|ΔHmix|). The values of these parameters were in the ranges -41.9 < ΔHmix < -25.5 kJ/mol, 4.7 < ΔSmix < 15 J/molK, 4.33 < VEC < 4.89, 6.23 < δ < 9.44, 0.38 < Ω < 1.35, and 0.118 < Δχ < 0.248, with a gap in Δχ values between 0.164 and 0.181. Correlations between ΔSmix and Ω, and between ΔSmix and VEC, were found for all of the eutectics. The correlation between ΔHmix and δ for the eutectics was the same as that of the Nbss, with more negative ΔHmix for the former. The δ versus Δχ map separated the Ti-rich eutectics from the Ti-poor eutectics, with a gap in Δχ values between 0.164 and 0.181, which is within the Δχ gap of the Nbss. Eutectics were separated according to alloying additions in the Δχ versus VEC, Δχ versus
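The parameters used above are standard composition-weighted quantities and are straightforward to compute. The sketch below uses illustrative numbers (an equiatomic 5-component mix and round-figure Tm and ΔHmix values, not data from the paper's alloys) to show how ΔSmix and Ω are obtained:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_S_mix(concs):
    """Ideal configurational entropy of mixing: -R * sum(c_i * ln c_i)."""
    return -R * sum(c * math.log(c) for c in concs if c > 0)

def omega(T_m, dS_mix, dH_mix):
    """Omega = Tm * dSmix / |dHmix|, with dHmix in J/mol."""
    return T_m * dS_mix / abs(dH_mix)

# Illustrative equiatomic 5-component composition:
concs = [0.2] * 5
dS = delta_S_mix(concs)                 # = R * ln(5), about 13.4 J/(mol K)
om = omega(T_m=2200.0, dS_mix=dS, dH_mix=-30_000.0)
```

With these round figures Ω comes out near 1, i.e. within the 0.38 < Ω < 1.35 range reported for the eutectics.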
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [11C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data.
Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
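For reference, the baseline MLEM update that OSEM accelerates (by applying the same update over subsets of the data) can be sketched compactly. The system matrix and sizes below are toy values chosen for illustration, not a PET geometry:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Classic MLEM update for Poisson data y ~ Poisson(A @ x):
        x <- x / sens * A.T @ (y / (A @ x)),  with sens = A.T @ 1.
    OSEM cycles this update over subsets of the rows of A."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# Tiny noiseless example: MLEM recovers the true activity.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

The multiplicative form keeps the estimate nonnegative automatically, which is one reason EM-type methods remain attractive despite their slow convergence.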
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
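Wahba's loss function mentioned above has several closed-form minimizers. QUEST solves the equivalent quaternion eigenproblem; the SVD route sketched below is an illustrative alternative with toy data, not the paper's gyro/star-tracker setup:

```python
import numpy as np

def wahba_svd(v_body, v_ref, weights=None):
    """Minimize sum_i w_i * ||b_i - R @ r_i||^2 over rotations R via SVD
    (one standard solution of Wahba's problem; QUEST instead solves the
    equivalent 4x4 quaternion eigenproblem)."""
    w = np.ones(len(v_body)) if weights is None else np.asarray(weights)
    B = sum(wi * np.outer(b, r) for wi, b, r in zip(w, v_body, v_ref))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    # force a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Recover a known rotation from two noiseless vector observations:
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
obs = [R_true @ r for r in refs]
R_est = wahba_svd(obs, refs)
```

Two non-parallel vector observations are the minimum needed to determine attitude, which is why the noiseless example above recovers the rotation exactly.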
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
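The variance-based half of the online stopping rule can be sketched simply: track a performance indicator per generation and stop once its variance over a sliding window falls below a threshold. The window size and threshold below are illustrative, not the paper's calibrated values:

```python
def online_stop(history, window=10, var_threshold=1e-6):
    """Return True when the variance of the last `window` indicator
    values falls below the threshold (a minimal sketch of variance-based
    online convergence detection for an MOEA)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    mean = sum(recent) / window
    var = sum((v - mean) ** 2 for v in recent) / window
    return var < var_threshold
```

A plateaued indicator sequence triggers the stop, while a still-improving one does not; the paper pairs this with a trend-stagnation test to avoid stopping during temporary stalls.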
Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient
Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming
2016-01-01
This paper proposes an improved artificial bee colony algorithm named weighted global ABC (WGABC), designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides an expression for the global factor weights. Experiments on benchmark functions proved that the algorithm can greatly improve the convergence speed. We obtain the gas diffusion concentration from CFD theory and then simulate the gas diffusion model, including the influence of buildings, with the algorithm. Simulation verified the effectiveness of the WGABC algorithm in improving the convergence speed of the optimal deployment scheme for gas sensors. Finally, it is verified that the optimal deployment method based on the WGABC algorithm can greatly improve the monitoring efficiency of the sensors compared with conventional deployment methods. PMID:27322262
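The kind of modified solution search equation described above can be sketched as the standard ABC perturbation plus a weighted pull toward the global best. The weight and coefficient ranges below are illustrative assumptions (the paper derives its own expression for the global factor weight):

```python
import random

def wgabc_candidate(x, x_partner, x_best, w_global=0.5):
    """One WGABC-style candidate: perturb a single randomly chosen
    dimension with the usual ABC difference term, plus a weighted
    attraction toward the global best solution (weight illustrative)."""
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)    # standard ABC perturbation coefficient
    psi = random.uniform(0.0, 1.5)     # coefficient on the global-best term
    v = list(x)
    v[j] = x[j] + phi * (x[j] - x_partner[j]) + w_global * psi * (x_best[j] - x[j])
    return v
```

Biasing the search toward the global best is what speeds up convergence, at the usual cost of some exploration; the weight controls that trade-off.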
Well-tempered metadynamics converges asymptotically.
Dama, James F; Parrinello, Michele; Voth, Gregory A
2014-06-20
Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.
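The tempering mechanism whose convergence the Letter analyzes is compact: each deposited Gaussian hill is scaled down by the bias already accumulated at the deposition point. The sketch below is a minimal 1-D, grid-stored illustration with arbitrary units and a fixed trajectory (real simulations couple the bias back into the dynamics and often use splines or adaptive Gaussians):

```python
import math

def wt_metad_bias(traj, w0=0.1, sigma=0.2, dT=5.0, kB=1.0, grid=None):
    """Accumulate a well-tempered metadynamics bias on a 1-D grid:
    each hill is deposited with height w0 * exp(-V(s_t)/(kB*dT)),
    so the bias growth tempers off as V builds up."""
    grid = grid if grid is not None else [i * 0.05 - 2.0 for i in range(81)]
    V = [0.0] * len(grid)

    def bias_at(s):
        # nearest-grid-point lookup (a sketch; production codes interpolate)
        i = min(range(len(grid)), key=lambda k: abs(grid[k] - s))
        return V[i]

    for s_t in traj:
        h = w0 * math.exp(-bias_at(s_t) / (kB * dT))
        for i, s in enumerate(grid):
            V[i] += h * math.exp(-((s - s_t) ** 2) / (2 * sigma ** 2))
    return grid, V

# Hills repeatedly deposited at s = 0 shrink as the bias accumulates,
# so V grows only logarithmically rather than linearly in hill count.
grid, V = wt_metad_bias([0.0] * 100)
```

The asymptotic convergence result of the Letter is precisely about this tempered deposition: the bias approaches a well-defined limit related to the free energy scaled by a bias factor.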
Sliding mode control method having terminal convergence in finite time
NASA Technical Reports Server (NTRS)
Venkataraman, Subramanian T. (Inventor); Gulati, Sandeep (Inventor)
1994-01-01
An object of this invention is to provide robust nonlinear controllers for robotic operations in unstructured environments, based upon a new class of closed-loop sliding control methods, sometimes denoted terminal sliders, which enforce closed-loop control convergence to equilibrium in finite time. Improved performance results from the elimination of the high-frequency control switching previously employed for robustness to parametric uncertainties. Improved performance also results from the dependence of terminal slider stability upon the rate of change of uncertainties over the sliding surface, rather than upon the magnitude of the uncertainty itself, for robust control. Terminal sliding mode control also yields improved convergence where convergence time is finite and is to be controlled. A further object is to apply terminal sliders to robot manipulator control, benchmark performance against the traditional computed torque control method, and provide for the design of control parameters.
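The finite-time property that distinguishes terminal sliders from conventional (exponentially convergent) sliding modes comes from a fractional-power attractor. The toy integration below is illustrative (scalar dynamics with assumed gains, not the patent's manipulator controller): with exponent q/p strictly between 0 and 1, the state reaches zero in a finite, computable time rather than decaying exponentially forever.

```python
def settle_time(x0=1.0, k=1.0, q=3, p=5, dt=1e-4, t_max=10.0):
    """Integrate xdot = -k * sign(x) * |x|**(q/p), a terminal attractor.
    With 0 < q/p < 1 the analytic settling time is
        t* = |x0|**(1 - q/p) / (k * (1 - q/p)),
    finite, unlike plain exponential decay (q/p = 1)."""
    x, t = x0, 0.0
    while abs(x) > 1e-9 and t < t_max:
        step = k * (abs(x) ** (q / p))
        x -= dt * step * (1 if x > 0 else -1)
        t += dt
    return t

t_settle = settle_time()   # analytic t* = 1 / (1 * 0.4) = 2.5
```

For x0 = 1, k = 1, q/p = 0.6 the analytic settling time is 2.5, and the forward-Euler integration lands close to it.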
Superlinear convergence estimates for a conjugate gradient method for the biharmonic equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, R.H.; Delillo, T.K.; Horn, M.A.
1998-01-01
The method of Muskhelishvili for solving the biharmonic equation using conformal mapping is investigated. In [R.H. Chan, T.K. DeLillo, and M.A. Horn, SIAM J. Sci. Comput., 18 (1997), pp. 1571--1582] it was shown, using the Hankel structure, that the linear system in [N.I. Muskhelishvili, Some Basic Problems of the Mathematical Theory of Elasticity, Noordhoff, Groningen, the Netherlands] is the discretization of the identity plus a compact operator, and therefore the conjugate gradient method will converge superlinearly. Estimates are given here of the superlinear convergence in the cases when the boundary curve is analytic or in a Hölder class.
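The superlinear behavior described above is easy to observe on a model problem. The sketch below runs textbook conjugate gradient on an "identity plus low-rank" matrix, a finite-dimensional stand-in (chosen here for illustration) for the identity-plus-compact-operator structure; CG then converges in only a handful of iterations because the spectrum clusters at 1:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, max_iter=None):
    """Textbook CG for a symmetric positive definite system A x = b."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol ** 2:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Identity plus a rank-3 SPD perturbation: only 4 distinct eigenvalues,
# so CG converges (to roundoff) in at most 4 iterations.
rng = np.random.default_rng(0)
U = rng.standard_normal((50, 3))
A = np.eye(50) + U @ U.T
b = rng.standard_normal(50)
x_sol = conjugate_gradient(A, b)
```

For a genuinely compact perturbation the eigenvalues cluster at 1 rather than truncating, which is what yields superlinear (rather than finite-step) convergence in the infinite-dimensional setting.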
NASA Astrophysics Data System (ADS)
Qin, Fang; Fu, Yunfei
2016-06-01
Based on the merged measurements from the TRMM Precipitation Radar and Visible and Infrared Scanner, refined characteristics (intensity, frequency, vertical structure, and diurnal variation) and regional differences of the warm rain over the tropical and subtropical Pacific Ocean (40°S-40°N, 120°E-70°W) in boreal summer are investigated for the period 1998-2012. The results reveal that three warm rain types (phased, pure, and mixed) exist over these regions. The phased warm rain, which occurs during the developing or declining stage of precipitation weather systems, is located over the central to western Intertropical Convergence Zone, South Pacific Convergence Zone, and Northwest Pacific. Its occurrence frequency peaks at midnight and minimizes during daytime with a 5.5-km maximum echo top. The frequency of this warm rain type is about 2.2%, and it contributes to 40% of the regional total rainfall. The pure warm rain is characterized by typical stable precipitation with an echo top lower than 4 km, and mostly occurs in Southeast Pacific. Although its frequency is less than 1.3%, this type of warm rain accounts for 95% of the regional total rainfall. Its occurrence peaks before dawn and it usually disappears in the afternoon. For the mixed warm rain, some may develop into deep convective precipitation, while most are similar to those of the pure type. The mixed warm rain is mainly located over the ocean east of Hawaii. Its frequency is 1.2%, but this type of warm rain could contribute to 80% of the regional total rainfall. The results also uncover that the mixed and pure types occur over the regions where SST ranges from 295 to 299 K, accompanied by relatively strong downdrafts at 500 hPa. Both the mixed and pure warm rains happen in a more unstable atmosphere, compared with the phased warm rain.
Dual-scale Galerkin methods for Darcy flow
NASA Astrophysics Data System (ADS)
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
Carballo-Diéguez, Alex; Balán, Ivan C; Brown, William; Giguere, Rebecca; Dolezal, Curtis; Leu, Cheng-Shiun; Marzinke, Mark A; Hendrix, Craig W; Piper, Jeanna M; Richardson, Barbra A; Grossman, Cynthia; Johnson, Sherri; Gomez, Kailazarid; Horn, Stephanie; Kunjara Na Ayudhya, Ratiya Pamela; Patterson, Karen; Jacobson, Cindy; Bekker, Linda-Gail; Chariyalertsak, Suwat; Chitwarakorn, Anupong; Gonzales, Pedro; Holtz, Timothy H; Liu, Albert; Mayer, Kenneth H; Zorrilla, Carmen; Lama, Javier; McGowan, Ian; Cranston, Ross D
2017-01-01
Trials to assess microbicide safety require strict adherence to prescribed regimens. If adherence is suboptimal, safety cannot be adequately assessed. MTN-017 was a phase 2, randomized sequence, open-label, expanded safety and acceptability crossover study comparing 1) daily oral emtricitabine/tenofovir disoproxil fumarate (FTC/TDF), 2) daily use of reduced-glycerin 1% tenofovir (RG-TFV) gel applied rectally, and 3) RG-TFV gel applied before and after receptive anal intercourse (RAI)-if participants had no RAI in a week, they were asked to use two doses of gel within 24 hours. Product use was assessed by mixed methods including unused product return count, text messaging reports, and qualitative plasma TFV pharmacokinetic (PK) results. Convergence interviews engaged participants in determining the most accurate number of doses used based on product count and text messaging reports. Client-centered adherence counseling was also used. Participants (N = 187) were men who have sex with men and transgender women enrolled in the United States (42%), Thailand (29%), Peru (19%) and South Africa (10%). Mean age was 31.4 years (range 18-64 years). Based on convergence interviews, over an 8-week period, 94% of participants had ≥80% adherence to daily tablet, 41% having perfect adherence; 83% had ≥80% adherence to daily gel, 29% having perfect adherence; and 93% had ≥80% adherence to twice-weekly use during the RAI-associated gel regimen, 75% having perfect adherence and 77% having ≥80% adherence to gel use before and after RAI. Only 4.4% of all daily product PK results were undetectable and unexpected (TFV concentrations <0.31 ng/mL) given self-reported product use near sampling date. The mixed methods adherence measurement indicated high adherence to product use in all three regimens. Adherence to RAI-associated rectal gel use was as high as adherence to daily oral PrEP. 
A rectal microbicide gel, if efficacious, could be an alternative for individuals uninterested in daily oral PrEP.
O'Neill, Barbara J; Dwyer, Trudy; Reid-Searl, Kerry; Parkinson, Lynne
2018-03-01
To predict the factors that are most important in explaining nursing staff intentions towards early detection of the deteriorating health of a resident and providing subacute care in the nursing home setting. Nursing staff play a pivotal role in managing the deteriorating resident and determining whether the resident needs to be transferred to hospital or remain in the nursing home; however, there is a dearth of literature that explains the factors that influence their intentions. This information is needed to underpin hospital avoidance programs that aim to enhance nursing confidence and skills in this area. A convergent parallel mixed-methods study, using the theory of planned behaviour as a framework. Surveys and focus groups were conducted with nursing staff (n = 75) at a 94-bed nursing home at two points in time, prior to and following the implementation of a hospital avoidance program. The quantitative and qualitative data were analysed separately and merged during final analysis. Nursing staff had strong intentions, a positive attitude that became significantly more positive with the hospital avoidance program in place, and a reasonable sense of control; however, the influence of important referents was the strongest predictor of intention towards managing residents with deteriorating health. Support from a hospital avoidance program empowered staff and increased confidence to intervene. The theory of planned behaviour served as an effective framework for identifying the strong influence referents had on nursing staff intentions around managing residents with deteriorating health. Although nursing staff had a reasonable sense of control over this area of their work, they believed they benefitted from a hospital avoidance program initiated by the nursing home. Managers implementing hospital avoidance programs should consider the role of referents, appraise the known barriers and facilitators and take steps to identify those unique to their local situation. 
All levels of nursing staff play a role in preventing hospitalisation and should be consulted in the design, implementation and evaluation of any hospital avoidance strategies. © 2017 John Wiley & Sons Ltd.
A semi-implicit finite element method for viscous lipid membranes
NASA Astrophysics Data System (ADS)
Rodrigues, Diego S.; Ausas, Roberto F.; Mut, Fernando; Buscaglia, Gustavo C.
2015-10-01
A finite element formulation to approximate the behavior of lipid membranes is proposed. The mathematical model incorporates tangential viscous stresses and bending elastic forces, together with the inextensibility constraint and the enclosed volume constraint. The membrane is discretized by a surface mesh made up of planar triangles, over which a mixed formulation (velocity-curvature) is built based on the viscous bilinear form (Boussinesq-Scriven operator) and the Laplace-Beltrami identity relating position and curvature. A semi-implicit approach is then used to discretize in time, with piecewise linear interpolants for all variables. Two stabilization terms are needed: the first stabilizes the inextensibility constraint by a pressure-gradient-projection scheme (Codina and Blasco (1997) [33]); the second couples curvature and velocity to improve temporal stability, as proposed by Bänsch (2001) [36]. The volume constraint is handled by a Lagrange multiplier (which turns out to be the internal pressure), and an analogous strategy is used to filter out rigid-body motions. The nodal positions are updated in a Lagrangian manner according to the velocity solution at each time step. An automatic remeshing strategy maintains suitable refinement and mesh quality throughout the simulation. Numerical experiments show the convergent and robust behavior of the proposed method. Stability limits are obtained from numerous relaxation tests, and convergence with mesh refinement is confirmed both in the relaxation transient and in the final equilibrium shape. Virtual tweezing experiments are also reported, computing the dependence of the deformed membrane shape on the tweezing velocity (a purely dynamical effect). For sufficiently high velocities, a tether develops that shows good agreement, both in its final radius and in its transient behavior, with available analytical solutions. 
Finally, simulation results of a membrane subject to the simultaneous action of six tweezers illustrate the robustness of the method.
Zecevic, Aleksandra A; Li, Alvin Ho-Ting; Ngo, Charity; Halligan, Michelle; Kothari, Anita
2017-06-01
The purpose of this study was to assess the facilitators and barriers to implementation of the Systemic Falls Investigative Method (SFIM) on selected hospital units. A cross-sectional explanatory mixed methods design was used to converge results from a standardized safety culture survey with themes that emerged from interviews and focus groups. Findings were organized by the six elements of the Ottawa Model of Research Use framework. A geriatric rehabilitation unit of an acute care hospital and a neurological unit of a rehabilitation hospital were selected purposefully due to the high frequency of falls. Participants were hospital staff who took part in surveys (n = 39), interviews (n = 10) and focus groups (n = 12), and 38 people who were interviewed during falls investigations: fallers, family members, unit staff and hospital management. The intervention was implementation of the SFIM to investigate fall occurrences. The percentage of positive responses on the Modified Stanford Patient Safety Culture Survey Instrument was converged with qualitative themes on facilitators and barriers to intervention implementation. Both hospital units had an overall poor safety culture, which hindered intervention implementation. Facilitators were hospital accreditation, a strong emphasis on patient safety, infrastructure and dedicated champions. Barriers included heavy workloads, lack of time, lack of resources and poor communication. Successful implementation of the SFIM requires regulatory and organizational support, committed frontline staff and allocation of resources to identify active causes and latent contributing factors of falls. System-wide adjustments show promise for promotion of safety culture in hospitals where falls happen regularly. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyrya, Vitaliy; Mourad, Hashem Mohamed
We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate bending problem. The convergence of the methods is tested on a variety of meshes, including rectangular, quadrilateral, and meshes obtained by edge removal (i.e., highly irregular meshes). The convergence rates are presented for all of these tests.
NASA Astrophysics Data System (ADS)
Fan, Zhixiang; Sun, Weiguo; Zhang, Yi; Fu, Jia; Hu, Shide; Fan, Qunchao
2018-03-01
An interpolation method named the difference algebraic converging method for opacity (DACMo) is proposed to study the opacities and transmissions of metal plasmas. Studies on iron plasmas at temperatures near those of the solar convection zone show that (1) the DACMo values reproduce most spectral structures and magnitudes of experimental opacities and transmissions; (2) the DACMo can be used to predict unknown opacities at another temperature Te′ and density ρ′ using the opacity constants obtained at (Te, ρ); (3) the DACMo may predict reasonable opacities in regimes where experimental values are unavailable, which the least-squares (LS) method cannot; and (4) the computational speed of the DACMo is at least 10 times faster than that of the original difference converging method for opacity.
NASA Astrophysics Data System (ADS)
Weller, Evan; Jakob, Christian; Reeder, Michael
2017-04-01
Precipitation is often organized along coherent lines of low-level convergence, which at longer time and space scales form the well-known convergence zones over the tropical oceans. Here, an automated, objective method is used to identify instantaneous low-level convergence lines in the current climate of CMIP5 models, and the results are compared with those from reanalysis data. Identified convergence lines are combined with precipitation to assess the extent to which precipitation around the globe is associated with convergence in the lower troposphere. Differences between the current climate of the models and observations are diagnosed in terms of the frequency and intensity of both precipitation associated with convergence lines and that which is not. Future changes in the frequency and intensity of convergence lines, and associated precipitation, are also investigated for their contribution to the simulated future changes in total precipitation.
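The low-level convergence field underlying such diagnostics is simply the negative horizontal divergence of the wind. A minimal sketch (our own illustration on an idealized grid, not the paper's automated line-identification method; all names and values are assumed):

```python
import numpy as np

def low_level_convergence(u, v, dx, dy):
    """Convergence of a horizontal wind field: -(du/dx + dv/dy).

    u, v : 2-D arrays of zonal/meridional wind (m/s) on a regular grid.
    dx, dy : grid spacing in metres.
    Positive values indicate convergence (air flowing together).
    """
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return -(dudx + dvdy)

# Idealized example: meridional winds converging toward the line y = 0.
ny, nx = 50, 50
y = np.linspace(-1e5, 1e5, ny)               # metres
v = np.tile(-1e-3 * y[:, None], (1, nx))     # flow toward y = 0
u = np.zeros((ny, nx))
conv = low_level_convergence(u, v, dx=4e3, dy=y[1] - y[0])
# conv is uniformly positive (1e-3 s^-1) for this linear convergent flow
```

In a model diagnostic, grid points where `conv` exceeds a threshold would then be thinned and linked into instantaneous convergence lines.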
ERIC Educational Resources Information Center
Durston, Sarah; Konrad, Kerstin
2007-01-01
This paper aims to illustrate how combining multiple approaches can inform us about the neurobiology of ADHD. Converging evidence from genetic, psychopharmacological and functional neuroimaging studies has implicated dopaminergic fronto-striatal circuitry in ADHD. However, while the observation of converging evidence from multiple vantage points…
A Strictly Contractive Peaceman-Rachford Splitting Method for Convex Programming.
Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming
2014-07-01
In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor to PRSM to guarantee the strict contraction of its iterative sequence, and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
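The PRSM iteration alternates two subproblem solves with a multiplier update after each; the strictly contractive variant scales both multiplier updates by a relaxation factor alpha in (0, 1). A minimal sketch on a toy consensus problem with closed-form subproblems (our own illustration, not the authors' code; all constants are assumptions):

```python
import numpy as np

# Strictly contractive PRSM sketch on the toy consensus problem
#   min 0.5*||x - c||^2 + 0.5*||z - d||^2   s.t.  x - z = 0,
# whose solution is x = z = (c + d)/2. The relaxation factor
# alpha in (0, 1) on both multiplier steps is what makes the
# iteration strictly contractive.
def prsm(c, d, beta=1.0, alpha=0.9, iters=200):
    x = np.zeros_like(c)
    z = np.zeros_like(c)
    lam = np.zeros_like(c)                   # Lagrange multiplier
    for _ in range(iters):
        # x-subproblem (closed form for this quadratic f)
        x = (c + beta * z - lam) / (1 + beta)
        lam = lam + alpha * beta * (x - z)   # first multiplier step
        # z-subproblem (closed form for this quadratic g)
        z = (d + beta * x + lam) / (1 + beta)
        lam = lam + alpha * beta * (x - z)   # second multiplier step
    return x, z

c, d = np.array([1.0, 3.0]), np.array([5.0, -1.0])
x, z = prsm(c, d)
# x and z converge to the consensus point (c + d)/2 = [3., 1.]
```

With alpha = 1 this reduces to plain PRSM; the paper's point is that alpha < 1 restores strict contraction and yields a nonergodic O(1/t) rate.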
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1988-01-01
A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^(-1)_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).
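The vanishing-viscosity mechanism can be illustrated with the Lax-Friedrichs scheme, whose built-in numerical viscosity plays the role of the vanishing viscous term and drives a scalar conservation law toward its entropy solution. A short sketch for Burgers' equation (our own example, not taken from the report):

```python
import numpy as np

# Lax-Friedrichs scheme for Burgers' equation u_t + (u^2/2)_x = 0 on a
# periodic grid. Its built-in numerical viscosity is a discrete analogue
# of the vanishing-viscosity mechanism, selecting the entropy solution.
def lax_friedrichs_burgers(u0, dx, dt, steps):
    u = u0.copy()
    flux = lambda w: 0.5 * w * w              # Burgers flux f(u) = u^2/2
    for _ in range(steps):
        up = np.roll(u, -1)                   # u_{j+1} (periodic)
        um = np.roll(u, 1)                    # u_{j-1}
        u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))
    return u

# Riemann-like initial data: a step that steepens into a shock.
x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.where(x < 0.5, 1.0, 0.0)
u = lax_friedrichs_burgers(u0, dx=x[1] - x[0], dt=0.002, steps=100)
# the scheme is conservative and, under the CFL condition used here
# (dt*max|u|/dx = 0.4), monotone, so u stays within [0, 1]
```

Because the scheme is monotone under the CFL restriction, it satisfies exactly the kind of entropy-consistency and L(infinity)-stability requirements the theory above asks for.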
ERIC Educational Resources Information Center
Bose, Stacey; Roberts, Laura; White, George
2017-01-01
This study uses mixed methodology of the convergent design to examine stakeholder perceptions toward a transferred model of accreditation for national Christian schools in Latin America. Parents, teachers, and leaders from five accredited schools participated in an accreditation survey. One parent, teacher, and leader from each of the five…
Large-Eddy Simulations of Tropical Convective Systems, the Boundary Layer, and Upper Ocean Coupling
2014-09-30
… warmer profile through greater latent heat release. Resulting temperature profiles all follow essentially moist adiabats in the upper troposphere … (default RRTM ozone concentration profile). Greater convective mixing deepens the tropopause for cases with stronger moisture flux convergence. … with tropospheric temperatures about 4 degrees cooler than the original temperature profile. This case represents conditions during the suppressed …
Qualitative Flow Visualization of a 110-N Hydrogen/Oxygen Laboratory Model Thruster
NASA Technical Reports Server (NTRS)
deGroot, Wim A.; McGuire, Thomas J.; Schneider, Steven J.
1997-01-01
The flow field inside a 110 N gaseous hydrogen/oxygen thruster was investigated using an optically accessible, two-dimensional laboratory test model installed in a high altitude chamber. The injector for this study produced an oxidizer-rich core flow, which was designed to fully mix and react inside a fuel-film sleeve insert before emerging into the main chamber section, where a substantial fuel film cooling layer was added to protect the chamber wall. Techniques used to investigate the flow consisted of spontaneous Raman spectra measurements, visible emission imaging, ultraviolet hydroxyl spectroscopy, and high speed schlieren imaging. Experimental results indicate that the oxygen rich core flow continued to react while emerging from the fuel-film sleeve, suggesting incomplete mixing of the hydrogen in the oxygen rich core flow. Experiments also showed that the fuel film cooling protective layer retained its integrity throughout the straight section of the combustion chamber. In the converging portion of the chamber, however, a turbulent reaction zone near the wall destroyed the integrity of the film layer, a result which implies that a lower contraction angle may improve the fuel film cooling in the converging section and extend the hardware lifetime.
NASA Astrophysics Data System (ADS)
Costin, Ovidiu; Dunne, Gerald V.
2018-01-01
We show how to convert divergent series, which typically occur in many applications in physics, into rapidly convergent inverse factorial series. This can be interpreted physically as a novel resummation of perturbative series. Being convergent, these new series allow rigorous extrapolation from an asymptotic region with a large parameter, to the opposite region where the parameter is small. We illustrate the method with various physical examples, and discuss how these convergent series relate to standard methods such as Borel summation, and also how they incorporate the physical Stokes phenomenon. We comment on the relation of these results to Dyson’s physical argument for the divergence of perturbation theory. This approach also leads naturally to a wide class of relations between bosonic and fermionic partition functions, and Klein-Gordon and Dirac determinants.
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both the case of batch learning as well as the case of online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
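The core idea, adding a small chaotic perturbation to each gradient step and annealing it away so the iteration can settle, can be sketched as follows (our reading of the general technique, not the paper's algorithm; the logistic map, the toy loss, and all constants are assumptions):

```python
import numpy as np

# Chaos-injected gradient descent on a toy quadratic loss. The chaotic
# sequence comes from the logistic map x_{k+1} = 4 x_k (1 - x_k), and the
# injection strength eps is annealed so the weights can converge.
def chaos_injected_gd(grad, w0, lr=0.1, eps0=0.5, decay=0.99, iters=500):
    w = np.asarray(w0, dtype=float)
    x = 0.3                                      # logistic-map state
    eps = eps0
    for _ in range(iters):
        x = 4.0 * x * (1.0 - x)                  # chaotic sequence in (0, 1)
        w = w - lr * grad(w) + eps * (x - 0.5)   # centred chaotic kick
        eps *= decay                             # anneal the injection
    return w

# Toy loss E(w) = ||w - [1, -2]||^2 with gradient 2*(w - target).
target = np.array([1.0, -2.0])
w = chaos_injected_gd(lambda w: 2 * (w - target), w0=[0.0, 0.0])
# w approaches the minimizer [1, -2] as the injection decays
```

The weak-convergence result in the paper corresponds to the annealed case: as eps tends to zero, the update degenerates to plain gradient descent and the gradient of the error tends to zero.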
Magnetically-Driven Convergent Instability Growth platform on Z.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knapp, Patrick; Mattsson, Thomas; Martin, Matthew
Hydrodynamic instability growth is a fundamentally limiting process in many applications. In High Energy Density Physics (HEDP) systems such as inertial confinement fusion implosions and stellar explosions, hydro instabilities can dominate the evolution of the object and largely determine the final state achievable. Of particular interest is the process by which instabilities cause perturbations at a density or material interface to grow nonlinearly, introducing vorticity and eventually causing the two species to mix across the interface. Although quantifying instabilities has been the subject of many investigations in planar geometry, few have been done in converging geometry. During FY17, the team executed six convergent geometry instability experiments. Based on earlier results, the platform was redesigned and improved with respect to load centering at installation, making the installation reproducible, and a new 7.2 keV Co He-α backlighter system was developed to better penetrate the liner. Together, the improvements yielded significantly improved experimental results. The results in FY17 demonstrate the viability of using experiments on Z to quantify instability growth in cylindrically convergent geometry. Going forward, we will continue the partnership with staff and management at LANL to analyze the past experiments, compare to hydrodynamic growth models, and design future experiments.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step Maximum A Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, taking all parameters obtained from step one as the initial solution. Slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a synthetic model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. 
Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.
The Effect of Molar Axial Wall Height on CAD/CAM Ceramic Crowns With Moderate Occlusal Convergence
2006-05-01
Wyeth L. Hoopes. … CEREC e.max* CAD crowns on preparations with moderate total occlusal convergence (16 degrees). Methods: 60 recently-extracted maxillary third molars …
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.
2017-01-01
Background The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results Of the 23 IAM items, 21 were validated for content, while 2 were removed. 
In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014). Conclusions This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. PMID:28292738
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or to design new properties, mainly numerical methods are used. This makes the optimization hard to predict, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex owing to the variety of optimization methods and software tools and to the nonlinearity of models in different parameter spaces. The software tool ConvAn was developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable parameters of the model. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, and the computational time necessary to reach them. It is also possible to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
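The curve normalization described above can be illustrated in a few lines (function and variable names are ours, not ConvAn's API):

```python
import numpy as np

# Normalize convergence curves of several minimization runs to a common
# [0, 1] scale so different methods/models can be compared on the same axes.
def normalize_curves(curves):
    """curves: list of 1-D arrays of best-so-far objective values."""
    lo = min(c.min() for c in curves)   # best value seen by any run
    hi = max(c.max() for c in curves)   # worst (starting) value
    return [(c - lo) / (hi - lo) for c in curves]

runs = [np.array([10.0, 4.0, 2.5, 2.1, 2.0]),
        np.array([8.0, 7.5, 5.0, 3.0, 2.2])]
norm = normalize_curves(runs)
# each normalized curve starts near 1 and approaches 0 at the global best
```

Repeating this over many stochastic runs of each method gives the statistical convergence envelopes the tool uses to judge repeatability.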
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-08-04
Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman's correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Quantitatively, fidelity scores were high for all assessment methods, with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists' individual scores (69.8%-100%). Both qualitative and quantitative data (from physiotherapist variables) indicated that physiotherapists' knowledge (Spearman's association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. The intervention was delivered with high fidelity. 
This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. ISRCTN49875385; Pre-results.
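The quantitative scoring described above (percentage of prespecified components delivered per session, and Spearman correlations between fidelity and physiotherapist variables) can be sketched in a few lines. All session data, component names, and experience values below are hypothetical, not taken from the SOLAS trial:

```python
from statistics import mean, stdev

def fidelity_score(delivered, prespecified):
    """Percentage of prespecified intervention components delivered in one session."""
    return 100.0 * sum(1 for c in prespecified if c in delivered) / len(prespecified)

def spearman_rho(x, y):
    """Spearman rank correlation (ties not handled; for illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: components delivered per session, against a 10-item checklist.
prespecified = [f"component_{i}" for i in range(10)]
sessions = [prespecified[:9], prespecified[:8], prespecified, prespecified[:7]]
scores = [fidelity_score(s, prespecified) for s in sessions]
print(f"mean fidelity {mean(scores):.1f}% (SD {stdev(scores):.1f})")

# Hypothetical therapist-level association: fidelity vs. years of experience.
experience = [2, 5, 9, 12]
fidelity = [70.0, 81.0, 88.0, 95.0]
print(f"Spearman's rho = {spearman_rho(experience, fidelity):.2f}")
```

In practice one would use `scipy.stats.spearmanr`, which also handles ties and reports a p-value; the hand-rolled version above only shows what the statistic measures.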
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
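As a rough illustration of the underlying idea (a dynamical system whose state trajectory converges to a linear-programming solution), the sketch below integrates a projected penalty-gradient flow with forward Euler on a toy problem. This is a generic gradient scheme, not the letter's finite-time-convergent network, and all problem data are made up:

```python
# Toy LP: minimize c^T x  subject to  A x = b, x >= 0.
# Euler-discretised gradient flow on  c^T x + (rho/2)*||A x - b||^2,
# with each step projected onto the nonnegative orthant.
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]
rho, step, iters = 100.0, 0.005, 5000

x = [0.0, 0.0]
for _ in range(iters):
    residual = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(1)]
    grad = [c[j] + rho * sum(A[i][j] * residual[i] for i in range(1)) for j in range(2)]
    x = [max(0.0, x[j] - step * grad[j]) for j in range(2)]

print(x)  # near the LP optimum (1, 0); the quadratic penalty biases x[0] to 1 - 1/rho
```

The penalty formulation only reaches the exact optimum as rho grows; the letter's contribution is precisely that its network converges to the exact solution in finite time, which this simple flow does not achieve.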
Medical students can teach communication skills - a mixed methods study of cross-year peer tutoring.
Nomura, Osamu; Onishi, Hirotaka; Kato, Hiroyuki
2017-06-15
Cross-year peer tutoring (CYPT) of medical students is recognized as an effective learning tool. The aim of this study is to investigate the non-inferiority of the objective outcome of medical interview training with CYPT compared with the results of faculty-led training (FLT), and to explore qualitatively the educational benefits of CYPT. We conducted a convergent mixed methods study including a randomized controlled non-inferiority trial and two focus groups. For the CYPT group, teaching was led by six student tutors from year 5. In the FLT group, students were taught by six physicians. Focus groups for student learners (four tutees) and student teachers (six tutors) were conducted following the training session. One hundred sixteen students agreed to participate. The OSCE scores of the CYPT group and FLT group were 91.4 and 91.2, respectively. The difference in the mean score was 0.2 with a 95% CI of -1.8 to 2.2 within the predetermined non-inferiority margin of 3.0. By analyzing the focus groups, we extracted 13 subordinate concepts and formed three categories including 'Benefits of CYPT', 'Reflections of tutees and tutors' and 'Comparison with faculty', which affected the interactions among tutees, tutors, and faculty. CYPT is effective for teaching communication skills to medical students and for enhancing reflective learning among both tutors and tutees.
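The non-inferiority logic reported above (a confidence interval for the mean OSCE score difference compared against a prespecified margin) can be sketched as follows; apart from the figures quoted in the abstract, the standard-error reconstruction is hypothetical:

```python
def non_inferior(ci_lower, margin):
    """CYPT is non-inferior to FLT if the CI for (CYPT - FLT) lies above -margin."""
    return ci_lower > -margin

# Values reported in the abstract: mean difference 0.2, 95% CI (-1.8, 2.2), margin 3.0.
diff = 0.2
margin = 3.0
print(non_inferior(-1.8, margin))  # True: -1.8 > -3.0, so non-inferiority is concluded

# Reconstructing the interval from a standard error (hypothetical; implied by the
# half-width 2.0 / 1.96, using a normal approximation):
se = 2.0 / 1.96
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(round(lo, 1), round(hi, 1))
```

Note that the whole interval, not just the point estimate, must clear the margin: a difference of 0.2 with a CI reaching below -3.0 would fail the test despite the near-identical means.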
Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob
2017-12-12
The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties, here exemplified for calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size, and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws (particularly stabilizing feedbacks) are examined, with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of the appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
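As a finite-dimensional caricature of computing a stabilizing feedback law (far simpler than the infinite-dimensional boundary-control problems considered here), the sketch below iterates the discrete-time Riccati recursion for a hypothetical scalar unstable plant and checks that the resulting LQR gain stabilizes it; all plant and cost parameters are invented:

```python
# Scalar plant x[k+1] = a*x[k] + b*u[k] with a > 1 (unstable),
# quadratic cost sum(q*x^2 + r*u^2).
a, b, q, r = 1.2, 1.0, 1.0, 0.1

# Fixed-point iteration of the discrete-time algebraic Riccati equation.
p = q
for _ in range(200):
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

k = a * b * p / (r + b * b * p)   # optimal state-feedback gain, u = -k*x
closed_loop = a - b * k
print(f"gain k = {k:.4f}, closed-loop pole = {closed_loop:.4f}")
assert abs(closed_loop) < 1.0     # the feedback is stabilizing
```

For the structures discussed in the abstract, the analogous computation must be done on a convergent finite-dimensional approximation of an operator Riccati equation, and the abstract's point is that naive discretisations can destroy stability and robustness of the resulting feedback.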
Pan, Han; Jing, Zhongliang; Qiao, Lingfeng; Li, Minzhe
2017-09-25
Image restoration is a difficult and challenging problem in various imaging applications. However, despite the benefits of a single overcomplete dictionary, several challenges remain in capturing the geometric structure of the image of interest. To more accurately represent the local structures of the underlying signals, we propose a new problem formulation for sparse representation with a block-orthogonal constraint. There are three contributions. First, a framework for discriminative structured dictionary learning is proposed, which leads to a smooth manifold structure and quotient search spaces. Second, an alternating minimization scheme is proposed after taking both the cost function and the constraints into account. This is achieved by iteratively alternating between updating the block structure of the dictionary defined on the Grassmann manifold and sparsifying the dictionary atoms automatically. Third, Riemannian conjugate gradient is considered to track local subspaces efficiently with a convergence guarantee. Extensive experiments on various datasets demonstrate that the proposed method outperforms the state-of-the-art methods on the removal of mixed Gaussian-impulse noise.