NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), in which the computational domain is divided into multi-level cubic domains and each cube contains the same number of grid points, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement in which the non-dimensional wall distance y+ near the sphere is used as the criterion for mesh refinement. The results showed that the load imbalance caused by the y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, in which explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
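To make the cube-based data management concrete, here is a minimal Python sketch of a BCM-style cube hierarchy: every cube carries the same number of cells, refinement subdivides a flagged cube into eight children, and leaf cubes are distributed over ranks by a simple equal-count rule. The class names, the refinement criterion and the partitioning rule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cube:
    level: int                           # refinement level (0 = root)
    origin: tuple                        # lower corner of the cube
    size: float                          # edge length
    n_cell: int = 16                     # every cube holds n_cell**3 cells
    children: List["Cube"] = field(default_factory=list)

    def refine(self):
        """Split into 8 children; each keeps n_cell**3 cells, doubling resolution."""
        h = self.size / 2.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    o = (self.origin[0] + i * h,
                         self.origin[1] + j * h,
                         self.origin[2] + k * h)
                    self.children.append(Cube(self.level + 1, o, h, self.n_cell))

def leaves(cube):
    if not cube.children:
        return [cube]
    out = []
    for c in cube.children:
        out.extend(leaves(c))
    return out

def adapt(root, needs_refinement, max_level=4):
    """Refine every leaf cube flagged by the criterion (e.g. a y+ threshold)."""
    for cube in leaves(root):
        if cube.level < max_level and needs_refinement(cube):
            cube.refine()

def balance(root, n_ranks):
    """Equal-count partition of leaf cubes: reasonable because every cube
    carries the same number of cells and hence roughly the same work."""
    lv = leaves(root)
    return {rank: lv[rank::n_ranks] for rank in range(n_ranks)}

# usage: split the root once, refine the cubes touching z = 0 (a stand-in for
# a wall-distance criterion), then distribute the leaves over 4 ranks
root = Cube(level=0, origin=(0.0, 0.0, 0.0), size=1.0)
adapt(root, lambda c: True)
adapt(root, lambda c: c.origin[2] == 0.0)
parts = balance(root, n_ranks=4)
print({rank: len(cubes) for rank, cubes in parts.items()})
```

Because every cube carries the same cell count, balancing the number of cubes per rank is a reasonable proxy for balancing work, which is the property the abstract exploits.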
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
Multilevel processes and cultural adaptation: Examples from past and present small-scale societies
Reyes-García, V.; Balbo, A. L.; Gomez-Baggethun, E.; Gueze, M.; Mesoudi, A.; Richerson, P.; Rubio-Campillo, X.; Ruiz-Mallén, I.; Shennan, S.
2016-01-01
Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and emergent properties or collective behaviours. After a brief review of the concept of “cultural adaptation” from the perspective of cultural evolutionary theory and resilience theory, the core of the paper is constructed around the exploration of multilevel processes occurring at the temporal, spatial, social and political scales. We do so by examining small-scale societies’ case studies. In each section, we discuss the importance of the selected scale for understanding cultural adaptation and then present an example that illustrates how multilevel processes in the selected scale help explain observed patterns in the cultural adaptive process. We end the paper discussing the potential of modelling and computer simulation for studying multilevel processes in cultural adaptation. PMID:27774109
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
Adaptive relaxation for the steady-state analysis of Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham
1994-01-01
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
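The dynamic strategy described above can be illustrated with a small worklist variant of Gauss-Seidel for the steady-state equations pi = pi P. The sketch below is an illustration under stated assumptions rather than the paper's algorithm: a state is revisited only when a state it depends on has changed by more than a tolerance.

```python
import numpy as np

def adaptive_gauss_seidel(P, tol=1e-10, max_updates=100000):
    """Steady state pi of an irreducible DTMC with row-stochastic P, visiting
    states from a dynamic work list instead of sweeping them in a fixed order."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    work = set(range(n))                     # states scheduled for an update
    updates = 0
    while work and updates < max_updates:
        i = work.pop()
        # Gauss-Seidel update of the balance equation pi_i = sum_j pi_j P[j, i]
        new = (pi @ P[:, i] - pi[i] * P[i, i]) / (1.0 - P[i, i])
        change = abs(new - pi[i])
        pi[i] = new
        updates += 1
        if change > tol:
            # pi_k depends on pi_i whenever P[i, k] > 0, so revisit those states
            work.update(np.nonzero(P[i, :])[0].tolist())
    return pi / pi.sum()

# usage on a small 3-state chain
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.0, 0.9]])
pi = adaptive_gauss_seidel(P)
print(pi, pi @ P)            # pi @ P should reproduce pi
```

For a chain this small the work list empties after a handful of updates; the intended benefit of the dynamic strategy appears on large chains where improvement is concentrated in a few regions.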
A Note on Multigrid Theory for Non-nested Grids and/or Quadrature
NASA Technical Reports Server (NTRS)
Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.
1996-01-01
We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
NASA Astrophysics Data System (ADS)
Al-Chalabi, Rifat M. Khalil
1997-09-01
Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor physics consistent homogenization technique. The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
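The telescoping structure of the multi-level estimator can be sketched on a toy production-degradation network (rates k1, k2 and sample counts chosen arbitrarily). The pairing below uses the split-propensity coupling of Anderson and Higham with a fixed τ per level and the same number of samples on every level; the adaptive τ selection that is this paper's contribution is not shown.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy production-degradation network:  0 --k1--> X,  X --k2--> 0
k1, k2 = 10.0, 1.0
T = 4.0

def prop(x):
    return np.array([k1, k2 * x])     # propensities of the two channels
stoich = np.array([+1, -1])           # state change caused by each channel

def tau_leap(tau):
    """Single plain tau-leap path; returns X(T)."""
    x, t = 0.0, 0.0
    while t < T:
        fires = rng.poisson(prop(x) * tau)
        x = max(x + stoich @ fires, 0.0)
        t += tau
    return x

def coupled_pair(tau_c):
    """One fine/coarse pair sharing randomness via the split-propensity
    (Anderson-Higham) coupling; the fine step is tau_c / 2."""
    tau_f = tau_c / 2.0
    xf = xc = 0.0
    t = 0.0
    while t < T:
        ac = prop(xc)                 # coarse propensities frozen over the coarse step
        for _ in range(2):            # two fine sub-steps per coarse step
            af = prop(xf)
            m = np.minimum(ac, af)
            shared = rng.poisson(m * tau_f)
            fine_fires = shared + rng.poisson((af - m) * tau_f)
            coarse_fires = shared + rng.poisson((ac - m) * tau_f)
            xf = max(xf + stoich @ fine_fires, 0.0)
            xc = max(xc + stoich @ coarse_fires, 0.0)
        t += tau_c
    return xf, xc

# Telescoping estimate of E[X(T)]:  E[X_0] + sum_l E[X_l - X_{l-1}]
tau0, levels, n = 0.5, 3, 2000
est = np.mean([tau_leap(tau0) for _ in range(n)])
for l in range(1, levels + 1):
    pairs = [coupled_pair(tau0 / 2 ** (l - 1)) for _ in range(n)]
    est += np.mean([f - c for f, c in pairs])
print("MLMC estimate of E[X(T)]:", est, " stationary mean k1/k2:", k1 / k2)
```

A real multi-level implementation would also allocate different numbers of samples per level according to the estimated variances; the uniform allocation here is purely for brevity.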
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model is highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum result, by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy is demonstrated by comparison with the single equivalent strategy.
Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying
2014-01-01
Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…
A robust multilevel simultaneous eigenvalue solver
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1993-01-01
Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of the solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not appropriately treat these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q^2 N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrodinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
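For readers unfamiliar with the projection step being generalized, the following is a small sketch of a classical block Rayleigh-Ritz separation applied to a cluster containing an exactly repeated eigenvalue. The test matrix, shift and inverse-iteration smoother are illustrative assumptions; the paper's ML projection and backrotation techniques are not reproduced here.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Classical block Rayleigh-Ritz step: project A onto span(V) and solve the
    small dense eigenproblem to separate the approximations in the cluster."""
    Q, _ = np.linalg.qr(V)               # orthonormal basis of the cluster subspace
    theta, Y = np.linalg.eigh(Q.T @ A @ Q)
    return theta, Q @ Y                  # Ritz values and separated Ritz vectors

# usage: a symmetric matrix with a repeated lowest eigenvalue (a "cluster"),
# a crude shifted inverse-iteration smoother, then one Rayleigh-Ritz separation
rng = np.random.default_rng(1)
n, q = 200, 3
A = np.diag(np.linspace(1.0, 10.0, n))
A[1, 1] = 1.0                            # make the two lowest eigenvalues equal
V = rng.standard_normal((n, q))
for _ in range(50):
    V = np.linalg.solve(A + 0.1 * np.eye(n), V)   # smooth toward the low modes
    V, _ = np.linalg.qr(V)                         # keep the block well conditioned
theta, V = rayleigh_ritz(A, V)
print(theta)                             # approaches [1.0, 1.0, 1.09...]
```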
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively-refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature detection based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.
Axelsson, Robert; Angelstam, Per; Myhrman, Lennart; Sädbom, Stefan; Ivarsson, Milis; Elbakidze, Marine; Andersson, Kenneth; Cupa, Petr; Diry, Christian; Doyon, Frederic; Drotz, Marcus K; Hjorth, Arne; Hermansson, Jan Olof; Kullberg, Thomas; Lickers, F Henry; McTaggart, Johanna; Olsson, Anders; Pautov, Yurij; Svensson, Lennart; Törnblom, Johan
2013-03-01
Implementing policies on sustainable landscapes and rural development necessitates social learning about states and trends of sustainability indicators, norms that define sustainability, and adaptive multi-level governance. We evaluate the extent to which social learning at multiple governance levels for sustainable landscapes occurs in 18 local development initiatives in the network of Sustainable Bergslagen in Sweden. We mapped activities over time, and interviewed key actors in the network about social learning. While activities resulted in exchange of experiences and some local solutions, a major challenge was to secure systematic social learning and make new knowledge explicit at multiple levels. None of the development initiatives used a systematic approach to secure social learning, and sustainability assessments were not made systematically. We discuss how social learning can be improved, and how a learning network of development initiatives could be realized.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
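The iterative-versus-multilevel comparison has the familiar textbook structure sketched below for a 1D Poisson model problem: Gauss-Seidel as the smoother, full-weighting restriction, linear interpolation, and recursive V-cycles. This is a generic illustration under those assumptions, not the cylindrical FVE discretization analyzed in the thesis.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    return r

def restrict(r):                      # full weighting onto the coarse grid
    return np.concatenate(([0.0],
                           0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2],
                           [0.0]))

def prolong(e):                       # linear interpolation back to the fine grid
    fine = np.zeros(2 * (len(e) - 1) + 1)
    fine[::2] = e
    fine[1::2] = 0.5 * (e[:-1] + e[1:])
    return fine

def v_cycle(u, f, h, nu=2):
    if len(u) <= 3:                   # coarsest grid: solve the single unknown
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = gauss_seidel(u, f, h, nu)                 # pre-smoothing
    r = restrict(residual(u, f, h))               # coarse-grid residual
    e = v_cycle(np.zeros_like(r), r, 2 * h)       # coarse-grid correction
    u += prolong(e)
    return gauss_seidel(u, f, h, nu)              # post-smoothing

# usage: -u'' = f on (0,1), u(0) = u(1) = 0, with f = pi^2 sin(pi x)
n = 129
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(8):
    u = v_cycle(u, f, h)
print("max error vs. sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```

Replacing the V-cycle loop with plain Gauss-Seidel sweeps of equal total cost shows the gap the abstract refers to: the multilevel iteration reaches discretization-level error in a handful of cycles, whereas the single-grid relaxation stalls on the smooth error components.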
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
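A stripped-down illustration of the multistart idea behind MLSL is given below on a toy two-minimum objective. SciPy's Nelder-Mead routine stands in for the MADS local phase, and the critical-distance rule is a simplified version of the usual MLSL recipe, so the objective, constants and names here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2      # two minima, at (+1, 0) and (-1, 0)

def mlsl(f, bounds, n_per_iter=20, n_iter=5, gamma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    samples, minima = [], []
    for _ in range(n_iter):
        pts = rng.uniform(lo, hi, size=(n_per_iter, dim))
        samples.extend((f(p), p) for p in pts)
        samples.sort(key=lambda t: t[0])
        reduced = samples[: max(1, int(gamma * len(samples)))]   # best fraction only
        # critical distance that shrinks as the total sample grows (simplified rule)
        r = 0.5 * (np.prod(hi - lo) * np.log(len(samples)) / len(samples)) ** (1.0 / dim)
        for fx, x in reduced:
            near_better = any(np.linalg.norm(x - y) < r and fy < fx for fy, y in reduced)
            near_found = any(np.linalg.norm(x - m.x) < r for m in minima)
            if not near_better and not near_found:
                res = minimize(f, x, method="Nelder-Mead")        # local phase
                if all(np.linalg.norm(res.x - m.x) > 1e-3 for m in minima):
                    minima.append(res)
    return minima

# usage: both local minima of the toy objective should be recovered
for m in mlsl(objective, bounds=[(-2.0, 2.0), (-2.0, 2.0)]):
    print(np.round(m.x, 3), round(m.fun, 6))
```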
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of research was: Developing adaptive meshes for the solution of Maxwell's equations; Developing a parallel framework for time dependent inverse Maxwell's equations; Developing multilevel methods for optimization problems with inequality constraints; A new inversion code for inverse Maxwell's equations in the 0th frequency (DC resistivity); A new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
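A minimal version of the adapted procedure, for a grayscale image quantized to L equally spaced output levels with the standard Floyd-Steinberg weights, might look as follows. The threshold placement (rounding to the nearest level) and the weights are the textbook defaults, not necessarily the parameter choices studied in the paper.

```python
import numpy as np

def error_diffuse(img, levels=4):
    """Multi-level error diffusion of an image with values in [0, 1]."""
    out = img.astype(float).copy()
    h, w = out.shape
    q = levels - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = min(max(round(old * q), 0), q) / q   # nearest of the L output levels
            out[y, x] = new
            err = old - new                            # diffuse the quantization error
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out

# usage: quantize a smooth gradient to 4 levels and check the mean is preserved
img = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
q4 = error_diffuse(img, levels=4)
print(sorted(set(np.round(q4.ravel(), 3))))   # the 4 output levels actually used
print(img.mean(), q4.mean())                  # averages should be close
```

The noticeable influence of the thresholds mentioned in the abstract can be explored by replacing the rounding rule with shifted or non-uniform decision levels while keeping the diffusion weights fixed.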
A multilevel correction adaptive finite element method for Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is kept improving by solving the derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
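The core property exploited above, namely that the adjoint of the discrete scheme yields gradients consistent with the discrete solution, can be seen in a very small setting. The sketch below differentiates a forward-Euler time stepper by marching the discrete adjoint backwards; it is only an illustration of the idea, not the RK-DG discretization or the automatic differentiation tooling discussed in the paper.

```python
import numpy as np

def f(x, p):      return -p * x            # toy dynamics dx/dt = -p x
def dfdx(x, p):   return -p
def dfdp(x, p):   return -x

def forward(p, x0=1.0, dt=0.01, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1], p))   # forward Euler
    return xs

def objective(p):
    return forward(p)[-1] ** 2                  # J = x_N^2

def adjoint_gradient(p, dt=0.01, steps=200):
    xs = forward(p, dt=dt, steps=steps)
    lam = 2.0 * xs[-1]                          # dJ/dx_N
    grad = 0.0
    for k in reversed(range(steps)):            # march the discrete adjoint backwards
        grad += lam * dt * dfdp(xs[k], p)       # accumulate dJ/dp contribution
        lam = lam * (1.0 + dt * dfdx(xs[k], p)) # lam_k = (1 + dt df/dx)^T lam_{k+1}
    return grad

p = 1.3
g_adj = adjoint_gradient(p)
g_fd = (objective(p + 1e-6) - objective(p - 1e-6)) / 2e-6
print(g_adj, g_fd)                              # the two should agree closely
```

Because the adjoint is derived from the discrete update itself, the gradient matches a finite difference of the numerical solution, which is exactly the consistency issue the paper studies for adaptive space-time discretizations and their mesh transfer operators.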
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.
1991-01-01
An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
On the Multilevel Solution Algorithm for Markov Chains
NASA Technical Reports Server (NTRS)
Horton, Graham
1997-01-01
We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.
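A two-level version of the aggregation principle with the multiplicative correction can be sketched as follows. The fixed hand-chosen partition and the power-method smoothing are illustrative simplifications; the paper's algorithm coarsens recursively with an operator-dependent aggregation.

```python
import numpy as np

def iad_step(P, pi, blocks, smooth=2):
    """One aggregation/disaggregation cycle with multiplicative correction."""
    nb = len(blocks)
    w = pi.copy()                               # within-block weights from the iterate
    for idx in blocks:
        w[idx] = pi[idx] / pi[idx].sum()
    A = np.zeros((nb, nb))                      # aggregated (coarse) transition matrix
    for I, rows in enumerate(blocks):
        for J, cols in enumerate(blocks):
            A[I, J] = w[rows] @ P[np.ix_(rows, cols)].sum(axis=1)
    vals, vecs = np.linalg.eig(A.T)             # exact steady state of the small chain
    xi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    xi = np.abs(xi) / np.abs(xi).sum()
    for I, idx in enumerate(blocks):            # multiplicative disaggregation
        pi[idx] = xi[I] * w[idx]
    for _ in range(smooth):                     # a few power-method smoothing steps
        pi = pi @ P
    return pi / pi.sum()

# usage on a 4-state chain aggregated into two 2-state blocks
P = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.30, 0.60, 0.05, 0.05],
              [0.02, 0.08, 0.50, 0.40],
              [0.05, 0.05, 0.30, 0.60]])
blocks = [np.array([0, 1]), np.array([2, 3])]
pi = np.full(4, 0.25)
for _ in range(10):
    pi = iad_step(P, pi, blocks)
print(pi, pi @ P)                               # pi @ P should reproduce pi
```

The coarse matrix built from the current iterate is the operator-dependent coarsening mentioned in the abstract: its entries change from cycle to cycle as the within-block weights are updated.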
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.
NASA Astrophysics Data System (ADS)
Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.
1997-08-01
A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy a converged solution with the desired true error is automatically guaranteed. Contrary to the current operator splitting methods the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is, thus, particularly attractive in complicated multilevel transfer problems where small grid-sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-01-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to the preset rule, and the signal is then transmitted through a standard single mode fiber span equal to 100 km. The receiver improves the accuracy of decoding by means of soft information passing through different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptation without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
NASA Astrophysics Data System (ADS)
Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca
2017-07-01
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily based on coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
Daily Stressors in School-Age Children: A Multilevel Approach
ERIC Educational Resources Information Center
Escobar, Milagros; Alarcón, Rafael; Blanca, María J.; Fernández-Baena, F. Javier; Rosel, Jesús F.; Trianes, María Victoria
2013-01-01
This study uses hierarchical or multilevel modeling to identify variables that contribute to daily stressors in a population of schoolchildren. Four hierarchical levels with several predictive variables were considered: student (age, sex, social adaptation of the student, number of life events and chronic stressors experienced, and educational…
Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations
NASA Technical Reports Server (NTRS)
Moon, Young J.; Liou, Meng-Sing
1989-01-01
Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Tuikkala, Johannes; Vähämaa, Heidi; Salmela, Pekka; Nevalainen, Olli S; Aittokallio, Tero
2012-03-26
Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications.
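The multilevel idea described above can be imitated in a few lines with networkx: coarsen the graph by collapsing a greedy edge matching, lay out the coarse graph with a conventional force-directed algorithm, and seed the fine layout from the coarse positions before relaxing again. This is a conceptual sketch under those assumptions, not the Cytoscape Multilevel Layout plug-in, and the test graph and parameters are arbitrary.

```python
import networkx as nx

def coarsen(G):
    """Collapse a greedy edge matching into super-nodes; return the coarse graph
    and the fine-to-coarse node mapping."""
    matched, mapping = set(), {}
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))
            mapping[u] = mapping[v] = ("m", u, v)      # merged super-node
    for n in G.nodes():
        mapping.setdefault(n, n)                       # unmatched nodes map to themselves
    H = nx.Graph()
    H.add_nodes_from(set(mapping.values()))
    for u, v in G.edges():
        if mapping[u] != mapping[v]:
            H.add_edge(mapping[u], mapping[v])
    return H, mapping

def multilevel_layout(G, levels=2, seed=7):
    if levels == 0 or G.number_of_edges() == 0:
        return nx.spring_layout(G, seed=seed)
    H, mapping = coarsen(G)
    coarse_pos = multilevel_layout(H, levels - 1, seed)
    init = {n: coarse_pos[mapping[n]] for n in G.nodes()}   # seed from the coarse level
    return nx.spring_layout(G, pos=init, iterations=30, seed=seed)

# usage on a small modular test graph
G = nx.connected_caveman_graph(4, 6)
pos = multilevel_layout(G)
print(len(pos), "node positions computed")
```

Seeding the force-directed refinement from the coarse solution is what keeps densely connected modules from collapsing onto each other, which is the failure mode the abstract attributes to conventional layouts of large clustered networks.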
Liu, Peng; Wang, Qiong; Niu, Meixing; Wang, Dunyou
2017-08-10
Combining multi-level quantum mechanics theories and molecular mechanics with an explicit water model, we investigated the ring opening process of guanine damage by the hydroxyl radical in aqueous solution. The detailed, atomic-level ring-opening mechanism along the reaction pathway was revealed in aqueous solution at the CCSD(T)/MM level of theory. The potentials of mean force in aqueous solution were calculated at both the DFT/MM and CCSD(T)/MM levels of theory. Our study found that the aqueous solution has a significant effect on this reaction. In particular, by comparing the geometries of the stationary points between the gas phase and aqueous solution, we found that the aqueous solution has a much greater impact on the torsion angles than on the bond lengths and bending angles. Our calculated free-energy barrier height of 31.6 kcal/mol at the CCSD(T)/MM level of theory agrees well with the one obtained based on the gas-phase reaction profile and free energies of solvation. In addition, the reaction path in the gas phase was also mapped using multi-level quantum mechanics theories, which shows a reaction barrier of 19.2 kcal/mol at the CCSD(T) level of theory, agreeing very well with a recent ab initio calculation result of 20.8 kcal/mol.
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
Kofi Akamani
2014-01-01
There is growing recognition that the sustainable governance of water resources requires building social-ecological resilience against future surprises. Adaptive comanagement, a distinct institutional mechanism that combines the learning focus of adaptive management with the multilevel linkages of comanagement, has recently emerged as a promising mechanism for building...
ERIC Educational Resources Information Center
Benish, Steven G.
2010-01-01
Culturally adapted psychotherapy has potential to improve psychotherapy outcomes for ethnic and racial minorities and solve a decades-long conundrum that alteration of specific ingredients does not improve psychotherapy outcomes. Adaptation of the cultural explanation of illness, known as the anthropological Myth in universal healing practices…
Exploring Adaptability through Learning Layers and Learning Loops
ERIC Educational Resources Information Center
Lof, Annette
2010-01-01
Adaptability in social-ecological systems results from individual and collective action, and multi-level interactions. It can be understood in a dual sense as a system's ability to adapt to disturbance and change, and to navigate system transformation. Inherent in this conception, as found in resilience thinking, are the concepts of learning and…
ERIC Educational Resources Information Center
Du, Ping
2008-01-01
This study used sampling survey data from rural elementary schools in western China to analyze school adaptability, which is the representative noncognitive development of rural elementary students. It also investigated factors influencing the school adaptability of elementary school students at an individual and school level by using production…
Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions
ERIC Educational Resources Information Center
Pokropek, Artur
2015-01-01
This article combines statistical and applied research perspective showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. This article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…
Bélanger, Diane; Abdous, Belkacem; Valois, Pierre; Gosselin, Pierre; Sidi, Elhadji A Laouan
2016-02-12
This study identifies the characteristics and perceptions related to the individual, the dwelling and the neighbourhood of residence associated with the prevalence of self-reported adverse health impacts and an adaptation index when it is very hot and humid in summer in the most disadvantaged sectors of the nine most populous cities of Québec, Canada, in 2011. The study uses a cross-sectional design and a stratified representative sample; 3485 people (individual-level) were interviewed in their residence. They lived in 1647 buildings (building-level) in 87 most materially and socially disadvantaged census dissemination areas (DA-level). Multilevel analysis was used to perform 3-level models nested one in the other to examine individual impacts as well as the adaptation index. For the prevalence of impacts, which is 46 %, the logistic model includes 13 individual-level indicators (including air conditioning and the adaptation index) and 1 building-level indicator. For the adaptation index, with values ranging from -3 to +3, the linear model has 10 individual-level indicators, 1 building-level indicator and 2 DA-level indicators. Of all these indicators, 9 were associated to the prevalence of impacts only and 8 to the adaptation index only. This 3-level analysis shows the differential importance of the characteristics of residents, buildings and their surroundings on self-reported adverse health impacts and on adaptation (other than air conditioning) under hot and humid summer conditions. It also identifies indicators specific to impacts or adaptation. People with negative health impacts from heat rely more on adaptation strategies while low physical activity and good dwelling/building insulation lead to lower adaptation. Better neighbourhood walkability favors adaptations other than air conditioning. Thus, adaptation to heat in these neighbourhoods seems reactive rather than preventive. These first multi-level insights pave the way for the development of a theoretical framework of the process from heat exposure to impacts and adaptation for research, surveillance and public health interventions at all relevant levels.
Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José
2013-11-18
We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by using the slicing of an incoherent optical source and the phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit better compliance with the FCC requirements. Moreover, the proposed scheme permits easy adaptation to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in adapting to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
NASA Astrophysics Data System (ADS)
Wałach, Daniel; Sagan, Joanna; Gicala, Magdalena
2017-10-01
The paper presents an environmental and economic analysis of material solutions for a multi-level garage. The construction project considered a reinforced concrete structure built with either ordinary concrete or high-performance concrete (HPC). The use of HPC allowed a significant reduction of reinforcement steel, mainly in the compression elements (columns) of the structure. The analysis includes elements of the integrated life cycle design (ILCD) methodology. Through a multi-criteria analysis based on established weights for the economic and environmental parameters, three solutions were evaluated and compared within the material production phase (information modules A1-A3).
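A minimal sketch of the weighted multi-criteria step described here, with entirely hypothetical weights and indicator values for the three material solutions (the actual ILCD indicators and weights come from the study):

```python
import numpy as np

# Rows: three hypothetical material solutions; columns: cost, global warming
# potential, steel mass. Values are placeholders, normalized so lower is better.
scores = np.array([[1.00, 1.00, 1.00],   # ordinary concrete
                   [1.10, 0.85, 0.70],   # HPC variant A
                   [1.20, 0.80, 0.60]])  # HPC variant B
weights = np.array([0.4, 0.4, 0.2])      # assumed economic/environmental weights

weighted = scores @ weights               # lower aggregate score = preferred solution
best = int(np.argmin(weighted))
print(weighted, "-> solution", best)
```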
Zhang, Qi-Jian; Miao, Shi-Feng; Li, Hua; He, Jing-Hui; Li, Na-Jun; Xu, Qing-Feng; Chen, Dong-Yun; Lu, Jian-Mei
2017-06-19
Small-molecule-based multilevel memory devices have attracted increasing attention because of their advantages, such as super-high storage density, fast reading speed, light weight, low energy consumption, and shock resistance. However, the fabrication of small-molecule-based devices always requires expensive vacuum-deposition techniques or high temperatures for spin-coating. Herein, through rational tailoring of a previous molecule, DPCNCANA (4,4'-(6,6'-bis(2-octyl-1,3-dioxo-2,3-dihydro-1H-benzo[de]isoquinolin-6-yl)-9H,9'H-[3,3'-bicarbazole]-9,9'-diyl)dibenzonitrile), a novel bat-shaped A-D-A-type (A-D-A=acceptor-donor-acceptor) symmetric framework has been successfully synthesized and can be dissolved in common solvents at room temperature. Additionally, it has a low-energy bandgap and dense intramolecular stacking in the film state. The solution-processed memory devices exhibited high-performance nonvolatile multilevel data-storage properties with low switching threshold voltages of about -1.3 and -2.7 V, which is beneficial for low power consumption. Our result should prompt the study of highly efficient solution-processed multilevel memory devices in the field of organic electronics. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kim, Seung-Tae; Cho, Won-Ju
2018-01-01
We fabricated a resistive random access memory (ReRAM) device on a Ti/AlOx/Pt structure with a solution-processed AlOx switching layer using microwave irradiation (MWI), and demonstrated multi-level cell (MLC) operation. To investigate the effect of MWI power on the MLC characteristics, post-deposition annealing was performed at 600-3000 W after AlOx switching layer deposition, and the MLC operation was compared with as-deposited (as-dep) and conventional thermal annealing (CTA) treated devices. All solution-processed AlOx-based ReRAM devices exhibited bipolar resistive switching (BRS) behavior. We found that these devices have four resistance states (2 bits) of MLC operation according to the modulation of the high-resistance states (HRSs) through reset voltage control. Particularly, compared to the as-dep and CTA ReRAM devices, the MWI-treated ReRAM devices showed a significant increase in the memory window and stable endurance for multi-level operation. Moreover, as the MWI power increased, excellent MLC characteristics were exhibited because the resistance ratio between each resistance state was increased. In addition, the devices exhibited reliable retention characteristics without deterioration at 25 °C and 85 °C for 10 000 s. Finally, the relationship between the chemical characteristics of the solution-processed AlOx switching layer and BRS-based multi-level operation according to the annealing method and MWI power was investigated using x-ray photoelectron spectroscopy.
Multilevel adaptive control of nonlinear interconnected systems.
Motallebzadeh, Farzaneh; Ozgoli, Sadjaad; Momeni, Hamid Reza
2015-01-01
This paper presents an adaptive backstepping-based multilevel approach for the first time to control nonlinear interconnected systems with unknown parameters. The system consists of a nonlinear controller at the first level to neutralize the interaction terms, and some adaptive controllers at the second level, in which the gains are optimally tuned using genetic algorithm. The presented scheme can be used in systems with strong couplings where completely ignoring the interactions leads to problems in performance or stability. In order to test the suitability of the method, two case studies are provided: the uncertain double and triple coupled inverted pendulums connected by springs with unknown parameters. The simulation results show that the method is capable of controlling the system effectively, in both regulation and tracking tasks. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Cicchetti, Dante; Rogosch, Fred A.
2009-01-01
The study of resilience in maltreated children reveals the possibility of coping processes and resources on multiple levels of analysis as children strive to adapt under conditions of severe stress. In a maltreating context, aspects of self-organization, including self-esteem, self-reliance, emotion regulation, and adaptable yet reserved…
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
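The Jacobian-free Newton-Krylov ingredient mentioned above can be sketched compactly: the Krylov solver only needs Jacobian-vector products, which are approximated by finite differences of the nonlinear residual. This is a generic toy sketch on a small 1D Bratu-type problem (the residual function, grid size, and tolerances are placeholders), not the AMR-coupled radiation-diffusion solver of the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, h = 15, 1.0 / 16                        # interior grid points of a 1D model problem

def residual(u):
    """F(u) = u'' + exp(u) = 0 with u(0) = u(1) = 0 (a Bratu-type toy problem)."""
    up = np.concatenate(([0.0], u, [0.0]))  # homogeneous Dirichlet boundaries
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2 + np.exp(u)

def jfnk(u, tol=1e-8, max_newton=20, eps=1e-7):
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol * np.sqrt(u.size):
            break
        # Matrix-free Jacobian-vector product: J v ~ (F(u + eps*v) - F(u)) / eps.
        Jv = lambda v: (residual(u + eps * v) - F) / eps
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, info = gmres(J, -F)            # Krylov solve for the Newton correction
        u = u + du
    return u

u = jfnk(np.zeros(n))
print(np.linalg.norm(residual(u)))
```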
Long bone reconstruction using multilevel lengthening of bone defect fragments.
Borzunov, Dmitry Y
2012-08-01
This paper presents experimental findings to substantiate the use of multilevel bone fragment lengthening for managing extensive long bone defects caused by diverse aetiologies and shows its clinical introduction which could provide a solution for the problem of reducing the total treatment time. Both experimental and clinical multilevel lengthening to bridge bone defect gaps was performed with the use of the Ilizarov method only. The experimental findings and clinical outcomes showed that multilevel defect fragment lengthening could provide sufficient bone formation and reduction of the total osteosynthesis time in one stage as compared to traditional Ilizarov bone transport. The method of multilevel regeneration enabled management of critical-size defects that measured on average 13.5 ± 0.7 cm in 78 patients. The experimental and clinical results proved the efficiency of the Ilizarov non-free multilevel bone plasty that can be recommended for practical use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filho, Faete; Maia, Helder Z; Mateus, Tiago Henrique D
2013-01-01
A new approach for modulation of an 11-level cascade multilevel inverter using selective harmonic elimination is presented in this paper. The dc sources feeding the multilevel inverter are considered to be varying in time, and the switching angles are adapted to the dc source variation. This method uses genetic algorithms to obtain switching angles offline for different dc source values. Then, artificial neural networks are used to determine the switching angles that correspond to the real-time values of the dc sources for each phase. This implies that each one of the dc sources of this topology can have different values at any time, but the output fundamental voltage will stay constant and the harmonic content will still meet the specifications. The modulating switching angles are updated at each cycle of the output fundamental voltage. This paper gives details on the method in addition to simulation and experimental results.
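As a rough sketch of the verification step implied here, one can evaluate the fundamental and residual harmonics of a quarter-wave-symmetric staircase waveform for a given set of switching angles and per-cell dc voltages. The angles and dc values below are hypothetical; in the paper they would come from the offline GA solutions and the neural-network interpolation.

```python
import numpy as np

def harmonic_amplitudes(theta_deg, vdc, orders):
    """Fourier amplitudes of a quarter-wave-symmetric staircase waveform:
    V_n = 4/(n*pi) * sum_i Vdc_i * cos(n*theta_i), odd n only."""
    theta = np.radians(np.asarray(theta_deg))
    vdc = np.asarray(vdc)
    return {n: 4.0 / (n * np.pi) * np.sum(vdc * np.cos(n * theta)) for n in orders}

# Hypothetical switching angles (degrees) and unequal dc-source values for an
# 11-level (5 cells per phase) cascade inverter.
angles = [6.6, 18.9, 27.2, 45.1, 62.3]
sources = [1.00, 0.98, 1.02, 0.97, 1.01]
print(harmonic_amplitudes(angles, sources, orders=(1, 5, 7, 11, 13)))
```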
Mathematical model comparing of the multi-level economics systems
NASA Astrophysics Data System (ADS)
Brykalov, S. M.; Kryanev, A. V.
2017-12-01
A mathematical model (scheme) for multi-level comparison of economic systems characterized by a system of indices is worked out. In this model, expert assessments and forecasts of the indicators of the economic system under consideration can be used. The model can take into account uncertainty in the estimated values of the parameters or in the expert estimations, and it uses a multi-criteria approach based on Pareto solutions.
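A minimal sketch of the Pareto step mentioned above: identifying the non-dominated systems given a matrix of indicator values. The indicator vectors are placeholders, and "higher is better" is assumed for every column.

```python
import numpy as np

def pareto_front(values):
    """Return indices of non-dominated rows (each column: higher is better)."""
    values = np.asarray(values)
    keep = []
    for i, v in enumerate(values):
        dominated = np.any(np.all(values >= v, axis=1) & np.any(values > v, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical indicator vectors for four economic systems.
indicators = [[0.7, 0.9, 0.4],
              [0.8, 0.5, 0.6],
              [0.6, 0.4, 0.3],   # dominated by the first system
              [0.9, 0.6, 0.5]]
print(pareto_front(indicators))   # -> [0, 1, 3]
```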
Suvak, Michael K; Walling, Sherry M; Iverson, Katherine M; Taft, Casey T; Resick, Patricia A
2009-12-01
Multilevel modeling is a powerful and flexible framework for analyzing nested data structures (e.g., repeated measures or longitudinal designs). The authors illustrate a series of multilevel regression procedures that can be used to elucidate the nature of the relationship between two variables across time. The goal is to help trauma researchers become more aware of the utility of multilevel modeling as a tool for increasing the field's understanding of posttraumatic adaptation. These procedures are demonstrated by examining the relationship between two posttraumatic symptoms, intrusion and avoidance, across five assessment points in a sample of rape and robbery survivors (n = 286). Results revealed that changes in intrusion were highly correlated with changes in avoidance over the 18-month posttrauma period.
Multigrid techniques for the solution of the passive scalar advection-diffusion equation
NASA Technical Reports Server (NTRS)
Phillips, R. E.; Schmidt, F. W.
1985-01-01
The solution of elliptic passive scalar advection-diffusion equations is required in the analysis of many turbulent flow and convective heat transfer problems. The accuracy of the solution may be affected by the presence of regions containing large gradients of the dependent variables. The multigrid concept of local grid refinement is a method for improving the accuracy of the calculations in these problems. In combination with the multilevel acceleration techniques, an accurate and efficient computational procedure is developed. In addition, a robust implementation of the QUICK finite-difference scheme is described. Calculations of a test problem are presented to quantitatively demonstrate the advantages of the multilevel-multigrid method.
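The local-refinement machinery is beyond a short sketch, but the basic coarse-grid correction cycle underlying the multilevel acceleration can be illustrated on a 1D Poisson model problem. This is a minimal sketch with arbitrary smoothing counts and grid size, not the QUICK-based advection-diffusion scheme of the paper.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One coarse-grid correction cycle: pre-smooth, restrict, solve, prolong, post-smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2          # fine residual
    nc = (u.size + 1) // 2                                             # coarse grid size
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]   # full weighting
    A2 = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A2, rc[1:-1])                           # exact coarse solve
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)     # prolong and correct
    return jacobi(u, f, h)

n, h = 65, 1.0 / 64
x = np.linspace(0.0, 1.0, n)
f, u = np.sin(np.pi * x), np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))  # compare with exact solution of -u'' = f
```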
Towards a Cross-Domain MapReduce Framework
2013-11-01
These Big Data applications typically run as a set of MapReduce jobs to take advantage of Hadoop’s ease of service deployment and large-scale... parallelism. Yet, Hadoop has not been adapted for multilevel secure (MLS) environments where data of different security classifications co-exist. To solve... multilevel security. The US Department of Defense (DoD) and US Intelligence Community (IC) recognize they have a Big Data problem
Muir, William M.; Cheng, Heng-Wei; Croney, Candace
2014-01-01
As consumers and society in general become more aware of ethical and moral dilemmas associated with intensive rearing systems, pressure is put on the animal and poultry industries to adopt alternative forms of housing. This presents challenges especially regarding managing competitive social interactions between animals. However, selective breeding programs are rapidly advancing, enhanced by both genomics and new quantitative genetic theory that offer potential solutions by improving adaptation of the bird to existing and proposed production environments. The outcomes of adaptation could lead to improvement of animal welfare by increasing fitness of the animal for the given environments, which might lead to increased contentment and decreased distress of birds in those systems. Genomic selection, based on dense genetic markers, will allow for more rapid improvement of traits that are expensive or difficult to measure, or have a low heritability, such as pecking, cannibalism, robustness, mortality, leg score, bone strength, disease resistance, and thus has the potential to address many poultry welfare concerns. Recently selection programs to include social effects, known as associative or indirect genetic effects (IGEs), have received much attention. Group, kin, multi-level, and multi-trait selection including IGEs have all been shown to be highly effective in reducing mortality while increasing productivity of poultry layers and reduce or eliminate the need for beak trimming. Multi-level selection was shown to increases robustness as indicated by the greater ability of birds to cope with stressors. Kin selection has been shown to be easy to implement and improve both productivity and animal well-being. Management practices and rearing conditions employed for domestic animal production will continue to change based on ethical and scientific results. However, the animal breeding tools necessary to provide an animal that is best adapted to these changing conditions are readily available and should be used, which will ultimately lead to the best possible outcomes for all impacted. PMID:25505483
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Sala, Marzio
In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source term dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentration. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.
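The study's production solver stack is not reproducible in a few lines, but the basic pattern of wrapping an algebraic multilevel V-cycle (or W-cycle) as a preconditioner for restarted GMRES can be sketched with the pyamg and SciPy libraries on a model sparse system. The Poisson matrix below merely stands in for a Jacobian; the drift-diffusion system itself is not constructed here.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import gmres

# Model sparse system standing in for a Jacobian (2D Poisson on a 200x200 grid).
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Algebraic multilevel hierarchy via smoothed aggregation; expose a V-cycle
# (cycle='W' would give a W-cycle) as a preconditioner for restarted GMRES.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

x, info = gmres(A, b, M=M, restart=30, maxiter=200)
print(info, np.linalg.norm(b - A @ x))
```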
Sturgeon, John A; Zautra, Alex J; Arewasikporn, Anne
2014-02-01
The processes of individual adaptation to chronic pain are complex and occur across multiple domains. We examined the social, cognitive, and affective context of daily pain adaptation in individuals with fibromyalgia and osteoarthritis. By using a sample of 260 women with fibromyalgia or osteoarthritis, we examined the contributions of pain catastrophizing, negative interpersonal events, and positive interpersonal events to daily negative and positive affect across 30 days of daily diary data. Individual differences and daily fluctuations in predictor variables were estimated simultaneously by utilizing multilevel structural equation modeling techniques. The relationships between pain and negative and positive affect were mediated by stable and day-to-day levels of pain catastrophizing as well as day-to-day positive interpersonal events, but not negative interpersonal events. There were significant and independent contributions of pain catastrophizing and positive interpersonal events to adaptation to pain and pain-related affective dysregulation. These effects occur both between persons and within a person's everyday life. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.
2007-04-01
In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
Shi, Yiquan; Wolfensteller, Uta; Schubert, Torsten; Ruge, Hannes
2018-02-01
Cognitive flexibility is essential to cope with changing task demands and often it is necessary to adapt to combined changes in a coordinated manner. The present fMRI study examined how the brain implements such multi-level adaptation processes. Specifically, on a "local," hierarchically lower level, switching between two tasks was required across trials while the rules of each task remained unchanged for blocks of trials. On a "global" level regarding blocks of twelve trials, the task rules could reverse or remain the same. The current task was cued at the start of each trial while the current task rules were instructed before the start of a new block. We found that partly overlapping and partly segregated neural networks play different roles when coping with the combination of global rule reversal and local task switching. The fronto-parietal control network (FPN) supported the encoding of reversed rules at the time of explicit rule instruction. The same regions subsequently supported local task switching processes during actual implementation trials, irrespective of rule reversal condition. By contrast, a cortico-striatal network (CSN) including supplementary motor area and putamen was increasingly engaged across implementation trials and more so for rule reversal than for nonreversal blocks, irrespective of task switching condition. Together, these findings suggest that the brain accomplishes the coordinated adaptation to multi-level demand changes by distributing processing resources either across time (FPN for reversed rule encoding and later for task switching) or across regions (CSN for reversed rule implementation and FPN for concurrent task switching). © 2017 Wiley Periodicals, Inc.
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
NASA Technical Reports Server (NTRS)
Mihalas, D.; Kunasz, P. B.
1978-01-01
The coupled radiative transfer and statistical equilibrium equations for multilevel ionic structures in the atmospheres of early-type stars are solved. Both lines and continua are treated consistently; the treatment is applicable throughout a transonic wind, and allows for the presence of background continuum sources and sinks in the transfer. An equivalent-two-level-atoms approach provides the solution for the equations. Calculations for simplified He (+)-like model atoms in parameterized isothermal wind models indicate that subordinate line profiles are sensitive to the assumed mass-loss rate, and to the assumed structure of the velocity law in the atmospheres.
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
Applying Critical Race Theory to Group Model Building Methods to Address Community Violence.
Frerichs, Leah; Lich, Kristen Hassmiller; Funchess, Melanie; Burrell, Marcus; Cerulli, Catherine; Bedell, Precious; White, Ann Marie
2016-01-01
Group model building (GMB) is an approach to building qualitative and quantitative models with stakeholders to learn about the interrelationships among multilevel factors causing complex public health problems over time. Scant literature exists on adapting this method to address public health issues that involve racial dynamics. This study's objectives are to (1) introduce GMB methods, (2) present a framework for adapting GMB to enhance cultural responsiveness, and (3) describe outcomes of adapting GMB to incorporate differences in racial socialization during a community project seeking to understand key determinants of community violence transmission. An academic-community partnership planned a 1-day session with diverse stakeholders to explore the issue of violence using GMB. We documented key questions inspired by critical race theory (CRT) and adaptations to established GMB "scripts" (i.e., published facilitation instructions). The theory's emphasis on experiential knowledge led to a narrative-based facilitation guide from which participants created causal loop diagrams. These early diagrams depict how violence is transmitted and how communities respond, based on participants' lived experiences and mental models of causation that grew to include factors associated with race. Participants found these methods useful for advancing difficult discussion. The resulting diagrams can be tested and expanded in future research, and will form the foundation for collaborative identification of solutions to build community resilience. GMB is a promising strategy that community partnerships should consider when addressing complex health issues; our experience adapting methods based on CRT is promising in its acceptability and early system insights.
ERIC Educational Resources Information Center
Bowman, Phillip J.
2006-01-01
This article applauds the strength-based model (SBM) of counseling but calls for an extension. In the existential or humanistic tradition, the SBM builds on emerging trends in psychology to highlight the importance of individual strengths in counseling interventions. However, a role strain and adaptation (RSA) approach extends the SBM to…
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
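A minimal sketch of the plain (prior) MLMC estimator described above, on a deliberately simple toy quantity of interest: a random-coefficient integral approximated at dyadic resolutions. The toy stands in for the nonlocal elliptic solve, and all numerical choices are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def qoi(k, level):
    """Midpoint-rule approximation of int_0^1 exp(k*x) dx with 2**level cells,
    standing in for a level-'level' discretization of the forward model."""
    m = 2 ** level
    x = (np.arange(m) + 0.5) / m
    return np.mean(np.exp(k * x))

def mlmc_estimate(max_level, samples_per_level):
    """Telescoping MLMC estimator: E[P_L] ~ sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        k = rng.uniform(0.5, 1.5, size=n)                 # random model input
        fine = np.array([qoi(ki, level) for ki in k])
        coarse = (np.array([qoi(ki, level - 1) for ki in k])
                  if level > 0 else np.zeros(n))
        total += np.mean(fine - coarse)                   # coupled correction term
    return total

# More samples on cheap coarse levels, fewer on expensive fine levels.
print(mlmc_estimate(max_level=5, samples_per_level=[4000, 2000, 1000, 500, 250, 125]))
# Reference: E[(exp(k) - 1)/k] over k ~ U(0.5, 1.5) is roughly 1.75.
```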
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
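For readers unfamiliar with the problem class, one common way to write a generic bi-level problem of the kind referred to here is the nested formulation below (the symbols F, f, g, X, Y are generic placeholders, not the paper's notation):

```latex
\min_{x \in X} \; F\bigl(x,\, y^{*}(x)\bigr)
\quad \text{subject to} \quad
y^{*}(x) \in \arg\min_{y \in Y} \bigl\{\, f(x, y) : g(x, y) \le 0 \,\bigr\},
```

where the upper-level objective F is evaluated at a minimizer of the lower-level problem, and the multilevel case simply nests further such subproblems.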
Coherent population transfer in multi-level Allen-Eberly models
NASA Astrophysics Data System (ADS)
Li, Wei; Cen, Li-Xiang
2018-04-01
We investigate the solvability of multi-level extensions of the Allen-Eberly model and the population transfer yielded by the corresponding dynamical evolution. We demonstrate that, under a matching condition of the frequency, the driven two-level system and its multi-level extensions possess a stationary-state solution in a canonical representation associated with a unitary transformation. As a consequence, we show that the resulting protocol is able to realize complete population transfer in a nonadiabatic manner. Moreover, we explore the imperfect pulsing process with truncation and show that the nonadiabatic effect in the evolution can suppress the cutoff error of the protocol.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei
2013-04-01
This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
Real-time data compression of broadcast video signals
NASA Technical Reports Server (NTRS)
Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)
1991-01-01
A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
Trosvik, Pål; de Muinck, Eric J; Rueness, Eli K; Fashing, Peter J; Beierschmitt, Evan C; Callingham, Kadie R; Kraus, Jacob B; Trew, Thomas H; Moges, Amera; Mekonnen, Addisu; Venkataraman, Vivek V; Nguyen, Nga
2018-05-05
The gelada monkey (Theropithecus gelada), endemic to the Ethiopian highlands, is the only graminivorous primate, i.e., it feeds mainly on grasses and sedges. In spite of known dental, manual, and locomotor adaptations, the intestinal anatomy of geladas is similar to that of other primates. We currently lack a clear understanding of the adaptations in digestive physiology necessary for this species to subsist on a graminoid-based diet, but digestion in other graminivores, such as ruminants, relies heavily on the microbial community residing in the gastrointestinal (GI) system. Furthermore, geladas form complex, multilevel societies, making them a suitable system for investigating links between sociality and the GI microbiota. Here, we explore the gastrointestinal microbiota of gelada monkeys inhabiting an intact ecosystem and document how factors like multilevel social structure and seasonal changes in diet shape the GI microbiota. We compare the gelada GI microbiota to those of other primate species, reporting a gradient from geladas to herbivorous specialist monkeys to dietary generalist monkeys and lastly humans, the ultimate ecological generalists. We also compare the microbiotas of the gelada GI tract and the sheep rumen, finding that geladas are highly enriched for cellulolytic bacteria associated with ruminant digestion, relative to other primates. This study represents the first analysis of the gelada GI microbiota, providing insights into the adaptations underlying graminivory in a primate. Our results also highlight the role of social organization in structuring the GI microbiota within a society of wild animals.
Developing soft skill training for salespersons to increase total sales
NASA Astrophysics Data System (ADS)
Mardatillah, A.; Budiman, I.; Tarigan, U. P. P.; Sembiring, A. C.; Hendi
2018-04-01
This research was conducted in the multilevel marketing industry. Unprofessional salesperson behavior and lack of responsibility can ruin the image of the multilevel marketing industry and create distrust toward it. This leads to decreased company revenue due to a lack of public interest in multilevel marketing products. Given these conditions, the researchers developed training programs to improve the competence of salespersons in making sales. This was done by examining the factors that affect salespersons' sales levels. The research analyzes several factors that influence a salesperson's sales level: presentation skills, questioning ability, adaptability, technical knowledge, self-control, interaction involvement, sales environment, and intrapersonal skills. Through the analysis of these factors with One Sample T-Test and Multiple Linear Regression methods, the researchers designed a training program for salespersons to increase their sales. The developed training consists of basic training and special training; before the training is given, salespersons need to be assessed for reasons of effectiveness and efficiency.
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Poonia, Mahaveer Prasad
2012-08-01
In this paper, we show a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as the control vectors of the higher level decision makers, are respectively defined by determining individual optimal solutions of each of the level decision makers. A possible relaxation of the higher level decision is considered for avoiding decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used for achieving the highest degree of each of the membership goals by minimizing negative deviational variables. We also provide a sensitivity analysis with variation of tolerance values on decision vectors to show how the solution is sensitive to the change of tolerance values, with the help of a numerical example.
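A minimal sketch of the membership-function construction used in this kind of fuzzy goal programming. The aspiration and tolerance values below are placeholders; in the method they come from the individual optimal solutions of each level's decision maker.

```python
import numpy as np

def membership(z, worst, best):
    """Linear tolerance membership: 0 at the worst acceptable value, 1 at the aspiration level."""
    return float(np.clip((z - worst) / (best - worst), 0.0, 1.0))

# Hypothetical objective values achieved by a candidate decision vector, with
# each goal's tolerance (worst) and aspiration (best) limits.
goals = {"level1_ratio": (0.62, 0.40, 0.75),   # (achieved, worst, best)
         "level2_ratio": (0.55, 0.30, 0.60),
         "control_vector": (0.80, 0.50, 1.00)}

negative_deviation = {name: 1.0 - membership(z, worst, best)
                      for name, (z, worst, best) in goals.items()}
# Fuzzy goal programming minimizes the (possibly weighted) sum of these deviations.
print(negative_deviation, sum(negative_deviation.values()))
```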
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries shows a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
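A rough sketch of the space-filling-curve idea used for the on-the-fly domain decomposition: compute a Morton (Z-order) key from integer cell coordinates, sort the cells along the curve, and split the ordering into equal contiguous chunks. The coordinates and rank count below are made up, and the paper's actual partitioner, load weights, and cut-off depths are not reproduced.

```python
import numpy as np

def morton_key(i, j, k, bits=10):
    """Interleave the bits of integer cell coordinates (i, j, k) into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= (((i >> b) & 1) << (3 * b + 2)) | (((j >> b) & 1) << (3 * b + 1)) \
               | (((k >> b) & 1) << (3 * b))
    return key

def partition(cells, n_ranks):
    """Sort cells along the space-filling curve and split into equal contiguous chunks."""
    order = sorted(range(len(cells)), key=lambda c: morton_key(*cells[c]))
    return np.array_split(np.array(order), n_ranks)   # cell indices owned by each rank

# Hypothetical set of (i, j, k) cell coordinates from an adaptively refined mesh.
rng = np.random.default_rng(0)
cells = [tuple(c) for c in rng.integers(0, 64, size=(1000, 3))]
parts = partition(cells, n_ranks=8)
print([len(p) for p in parts])
```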
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either impressive reduction in computational complexity or striking improvements in coding gain depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN, better than its prior-art binary counterpart.
Butel, Jean; Braun, Kathryn L; Novotny, Rachel; Acosta, Mark; Castro, Rose; Fleming, Travis; Powers, Julianne; Nigg, Claudio R
2015-12-01
Addressing complex chronic disease prevention, like childhood obesity, requires a multi-level, multi-component culturally relevant approach with broad reach. Models are lacking to guide fidelity monitoring across multiple levels, components, and sites engaged in such interventions. The aim of this study is to describe the fidelity-monitoring approach of The Children's Healthy Living (CHL) Program, a multi-level multi-component intervention in five Pacific jurisdictions. A fidelity-monitoring rubric was developed. About halfway during the intervention, community partners were randomly selected and interviewed independently by local CHL staff and by Coordinating Center representatives to assess treatment fidelity. Ratings were compared and discussed by local and Coordinating Center staff. There was good agreement between the teams (Kappa = 0.50, p < 0.001), and intervention improvement opportunities were identified through data review and group discussion. Fidelity for the multi-level, multi-component, multi-site CHL intervention was successfully assessed, identifying adaptations as well as ways to improve intervention delivery prior to the end of the intervention.
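As a small illustration of the agreement statistic reported here (Cohen's kappa between the local-staff and Coordinating Center fidelity ratings), the vectors below are entirely made-up rubric scores for the same set of community partners:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores (e.g., 0 = not delivered, 1 = partial, 2 = as planned)
# assigned independently to the same community partners by the two rater teams.
local_staff  = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2, 1, 2]
coord_center = [2, 1, 1, 0, 1, 2, 2, 2, 0, 2, 1, 1]

print(cohen_kappa_score(local_staff, coord_center))
```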
Multilevel and Community-Level Interventions with Native Americans: Challenges and Opportunities.
Blue Bird Jernigan, Valarie; D'Amico, Elizabeth J; Duran, Bonnie; Buchwald, Dedra
2018-06-02
Multilevel and community-level interventions that target the social determinants of health and ultimately health disparities are seldom conducted in Native American communities. To contextualize the importance of multilevel and community-level interventions, major contributors to and causes of health disparities in Native communities are highlighted. Among the many documented socioeconomic factors influencing health are poverty, low educational attainment, and lack of insurance. Well-recognized health disparities include obesity, diabetes, and hypertension. Selected challenges of implementing community-level and multilevel interventions in Native communities are summarized such as the shortage of high-quality population health data and validated measurement tools. To address the lack of multilevel and community-level interventions, the National Institutes of Health created the Intervention Research to Improve Native American Health (IRINAH) program which solicits proposals that develop, adapt, and test strategies to address these challenges and create interventions appropriate for Native populations. A discussion of the strategies that four of the IRINAH grantees are implementing underscores the importance of community-based participatory policy work, the development of new partnerships, and reconnection with cultural traditions. Based on the work of the nearly 20 IRINAH grantees, ameliorating the complex social determinants of health disparities among Native people will require (1) support for community-level and multilevel interventions that examine contemporary and historical factors that shape current conditions; (2) sustainability plans; (3) forefronting the most challenging issues; (4) financial resources and time to collaborate with tribal leaders; and (5) a solid evidence base.
Tahoun, A H
2017-01-01
In this paper, the stabilization problem for uncertain chaotic systems with actuator saturation is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms appearing due to the use of PID and saturation. The proposed control technique uses both the state-feedback and the output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples with MATLAB simulations are studied. The simulation results show the efficiency of the proposed adaptive PID controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
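The adaptive tuning laws and the multi-level augmented error are specific to the paper, but the saturation problem it addresses can be illustrated with a conventional back-calculation anti-windup PID on a toy first-order plant. Everything below, gains included, is a hypothetical sketch rather than the proposed controller.

```python
import numpy as np

def simulate(kp=2.0, ki=1.5, kd=0.1, kaw=1.0, u_max=1.0,
             setpoint=0.8, dt=0.01, t_end=10.0):
    """PID with output saturation and back-calculation anti-windup on dx/dt = -x + u."""
    x, integ, prev_err = 0.0, 0.0, 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        err = setpoint - x
        deriv = (err - prev_err) / dt
        u_unsat = kp * err + ki * integ + kd * deriv
        u = np.clip(u_unsat, -u_max, u_max)              # actuator saturation
        # Back-calculation: bleed off the integrator when the actuator saturates.
        integ += dt * (err + kaw * (u - u_unsat))
        x += dt * (-x + u)                               # toy first-order plant
        prev_err = err
        xs.append(x)
    return np.array(xs)

response = simulate()
print(response[-1])   # should settle near the setpoint despite the saturated actuator
```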
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoders/decoders) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
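A much-simplified sketch of the DPCM core described above: previous-sample prediction plus a nonuniform quantizer on one scan line. The quantizer levels are made up, and the multilevel Huffman coding of the quantized symbols is only indicated in a comment.

```python
import numpy as np

# Hypothetical nonuniform quantizer: fine levels near zero, coarse levels for large
# prediction errors; a Huffman coder would assign short codes to the frequent
# small-magnitude levels.
LEVELS = np.array([-96, -48, -20, -8, -2, 0, 2, 8, 20, 48, 96], dtype=float)

def dpcm_encode(line):
    """Encode one scan line: quantize the error of a previous-pixel predictor."""
    pred, symbols = 0.0, []
    for sample in line.astype(float):
        err = sample - pred
        idx = int(np.argmin(np.abs(LEVELS - err)))       # nearest quantizer level
        symbols.append(idx)
        pred = np.clip(pred + LEVELS[idx], 0, 255)       # decoder-tracked reconstruction
    return symbols

def dpcm_decode(symbols):
    pred, out = 0.0, []
    for idx in symbols:
        pred = np.clip(pred + LEVELS[idx], 0, 255)
        out.append(pred)
    return np.array(out)

line = (128 + 80 * np.sin(np.linspace(0, 6, 256))).astype(np.uint8)
rec = dpcm_decode(dpcm_encode(line))
print(np.max(np.abs(rec - line.astype(float))))   # peak error, dominated by the start-up transient
```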
Arabidopsis roots and shoots show distinct temporal adaptation patterns toward nitrogen starvation.
Krapp, Anne; Berthomé, Richard; Orsel, Mathilde; Mercey-Boutet, Stéphanie; Yu, Agnes; Castaings, Loren; Elftieh, Samira; Major, Hilary; Renou, Jean-Pierre; Daniel-Vedele, Françoise
2011-11-01
Nitrogen (N) is an essential macronutrient for plants. N levels in soil vary widely, and plants have developed strategies to cope with N deficiency. However, the regulation of these adaptive responses and the coordinating signals that underlie them are still poorly understood. The aim of this study was to characterize N starvation in adult Arabidopsis (Arabidopsis thaliana) plants in a spatiotemporal manner by an integrative, multilevel global approach analyzing growth, metabolites, enzyme activities, and transcript levels. We determined that the remobilization of N and carbon compounds to the growing roots occurred long before the internal N stores became depleted. A global metabolite analysis by gas chromatography-mass spectrometry revealed organ-specific differences in the metabolic adaptation to complete N starvation, for example, for several tricarboxylic acid cycle intermediates, but also for carbohydrates, secondary products, and phosphate. The activities of central N metabolism enzymes and the capacity for nitrate uptake adapted to N starvation by favoring N remobilization and by increasing the high-affinity nitrate uptake capacity after long-term starvation. Changes in the transcriptome confirmed earlier studies and added a new dimension by revealing specific spatiotemporal patterns and several unknown N starvation-regulated genes, including new predicted small RNA genes. No global correlation between metabolites, enzyme activities, and transcripts was evident. However, this multilevel spatiotemporal global study revealed numerous new patterns of adaptation mechanisms to N starvation. In the context of a sustainable agriculture, this work will give new insight for the production of crops with increased N use efficiency.
Xu, Yulong; Zhang, Jingxue; Wang, Dunyou
2015-06-28
The CH3Cl + CN(-) reaction in water was studied using a multilevel quantum mechanics/molecular mechanics (MM) method with the multilevels, electrostatic potential, density functional theory (DFT) and coupled-cluster single double triple (CCSD(T)), for the solute region. The detailed, back-side attack SN2 reaction mechanism was mapped along the reaction pathway. The potentials of mean force were calculated under both the DFT and CCSD(T) levels for the reaction region. The CCSD(T)/MM level of theory presents a free energy activation barrier height at 20.3 kcal/mol, which agrees very well with the experiment value at 21.6 kcal/mol. The results show that the aqueous solution has a dominant role in shaping the potential of mean force. The solvation effect and the polarization effect together increase the activation barrier height by ∼11.4 kcal/mol: the solvation effect plays a major role by providing about 75% of the contribution, while polarization effect only contributes 25% to the activation barrier height. Our calculated potential of mean force under the CCSD(T)/MM also has a good agreement with the one estimated using data from previous gas-phase studies.
NASA Technical Reports Server (NTRS)
Rybicki, G. B.; Hummer, D. G.
1991-01-01
A method is presented for solving multilevel transfer problems when nonoverlapping lines and background continuum are present and active continuum transfer is absent. An approximate lambda operator is employed to derive linear, 'preconditioned', statistical-equilibrium equations. A method is described for finding the diagonal elements of the 'true' numerical lambda operator, and therefore for obtaining the coefficients of the equations. Iterations of the preconditioned equations, in conjunction with the transfer equation's formal solution, are used to solve linear equations. Some multilevel problems are considered, including an eleven-level neutral helium atom. Diagonal and tridiagonal approximate lambda operators are utilized in the problems to examine the convergence properties of the method, and it is found to be effective for the line transfer problems.
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission best achieved via a large collection of interacting simple systems, or a relatively few highly capable, complex air vehicles?). The vastly unexplored area of optimization in evolving design spaces will be studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors and the SoS architect.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial stochastic simulation algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically, hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact model descriptions and makes it easy to adapt the spatial resolution of models.
Application of a multi-level grid method to transonic flow calculations
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Brandt, A.
1976-01-01
A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.
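The role of the coarser grids can be illustrated with a minimal two-level correction cycle for a 1D Poisson model problem: relaxation damps the oscillatory part of the error on the fine grid, and the coarse grid removes the remaining smooth part of the residual. The sketch below uses weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation; it is a generic model problem, not the transonic small-disturbance solver of the paper.

```python
# Minimal two-level correction sketch for -u'' = f on (0,1), u(0)=u(1)=0,
# illustrating how a coarse grid removes the smooth part of the residual left
# by relaxation on the fine grid.
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi relaxation for the stencil (-1, 2, -1)/h^2."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = smooth(u, f, h)                               # pre-smoothing (fine grid)
    r = residual(u, f, h)
    nc = u.size // 2 + 1
    rc = np.zeros(nc)                                 # restrict residual (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Solve the coarse-grid error equation exactly (small tridiagonal system).
    A = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / (2 * h) ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongate + correct
    return smooth(u, f, h)                            # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                    # exact solution: sin(pi*x)
u = np.zeros(n)
for cycle in range(8):
    u = two_grid(u, f, h)
    print(f"cycle {cycle + 1}: max residual = {np.max(np.abs(residual(u, f, h))):.2e}")
```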
NASA Astrophysics Data System (ADS)
Gill, Stuart P. D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.
2003-04-01
An adaptive multi-grid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological "market" today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.
Lv, Jing; Zhang, Jingxue; Wang, Dunyou
2016-02-17
We employed a multi-level quantum mechanics and molecular mechanics approach to study the reaction NH2Cl + OH(-) in aqueous solution. The multi-level quantum method (comprising the DFT method with both the B3LYP and M06-2X exchange-correlation functionals and the CCSD(T) method, all with the aug-cc-pVDZ basis set) was used to treat the quantum reaction region in different stages of the calculation in order to obtain an accurate potential of mean force. The free energy activation barriers obtained at the DFT/MM level of theory differed considerably: 21.8 kcal mol(-1) with the B3LYP functional and 27.4 kcal mol(-1) with the M06-2X functional. Nonetheless, the barrier heights become very close when shifting from DFT to CCSD(T): 22.4 kcal mol(-1) and 22.9 kcal mol(-1) at the CCSD(T)(B3LYP)/MM and CCSD(T)(M06-2X)/MM levels of theory, respectively. The free reaction energy obtained using CCSD(T)(M06-2X)/MM shows excellent agreement with the one calculated using the available gas-phase data. The aqueous solution plays a significant role in shaping the reaction profile. In total, the water solution contributes 13.3 kcal mol(-1) and 14.6 kcal mol(-1) to the free energy barrier heights at the CCSD(T)(B3LYP)/MM and CCSD(T)(M06-2X)/MM levels, respectively. The title reaction at nitrogen is faster than the corresponding reaction at carbon, CH3Cl + OH(-).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael J. Bockelie
2002-01-04
This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two-phase turbulent reacting flow. The target applications for the new tool are full-scale, fossil-fuel-fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The types of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial-scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points - the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture fine flow-field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block-structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined: (1) physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International; (2) mesh adaptation, data management, and parallelization software and technology being developed by users of the BoxLib library at LBNL; and (3) solution methods for problems formulated on block-structured grids that were being developed in collaboration with technical staff members at the University of Utah Center for High Performance Computing (CHPC) and at LBNL. The combustion modeling software used by Reaction Engineering International represents an investment of over fifty man-years of development, conducted over a period of twenty years. Thus, it was impractical to achieve our objective by starting from scratch. The research program resulted in an adaptive grid, reacting CFD flow solver that can be used only on limited problems. In its current form the code is appropriate for use on academic problems with simplified geometries. The new solver is not sufficiently robust or sufficiently general to be used in a "production mode" for industrial applications. The principal difficulty lies with the multi-level solver technology. The use of multi-level solvers on adaptive grids with embedded boundaries is not yet a mature field and there are many issues that remain to be resolved. From the lessons learned in this SBIR program, we have started work on a new flow solver with an AMR capability. The new code is based on a conventional cell-by-cell mesh refinement strategy used in unstructured grid solvers that employ hexahedral cells.
The new solver employs several of the concepts and solution strategies developed within this research program. The formulation of the composite grid problem for the new solver has been designed to avoid the embedded boundary complications encountered in this SBIR project. This follow-on effort will result in a reacting flow CFD solver with localized mesh capability that can be used to perform engineering calculations on industrial problems in a production mode.
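A much-simplified picture of such a cell-by-cell refinement strategy is sketched below for a 1D field: cells whose solution jump to a neighbor exceeds a threshold are split, concentrating resolution around steep features. The field, threshold, and refinement ratio are arbitrary assumptions; the actual solver operates on hexahedral cells in 3D with combustion-specific refinement indicators.

```python
# Generic sketch of cell-by-cell refinement driven by a jump indicator,
# loosely in the spirit of the refinement strategy described above. The
# "flame-like" field, threshold, and refinement ratio are assumptions; a real
# solver would refine hexahedral cells in 3D and interpolate stored solution
# data rather than re-evaluating an analytic field.
import math

def field(x):
    """Steep profile standing in for a flame front near x = 0.5."""
    return math.tanh(40.0 * (x - 0.5))

def refine(cells, values, threshold=0.15, ratio=2):
    """Split every 1D cell whose value jump to the next cell exceeds the threshold."""
    new_cells, new_values = [], []
    for i, (lo, hi) in enumerate(cells):
        jump = abs(values[min(i + 1, len(values) - 1)] - values[i])
        if jump > threshold:
            width = (hi - lo) / ratio
            for k in range(ratio):
                a = lo + k * width
                new_cells.append((a, a + width))
                new_values.append(field(a + 0.5 * width))
        else:
            new_cells.append((lo, hi))
            new_values.append(values[i])
    return new_cells, new_values

cells = [(i / 32.0, (i + 1) / 32.0) for i in range(32)]   # uniform starting grid
values = [field(0.5 * (lo + hi)) for lo, hi in cells]
for level in range(3):
    cells, values = refine(cells, values)
    print(f"refinement pass {level + 1}: {len(cells)} cells")
```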
Erfani, Amir
2014-12-01
Studies investigating fertility decline in developing countries often adopt measures of determinants of fertility behavior developed based on observations from developed countries, without adapting them to the realities of the study setting. As a result, their findings are usually invalid, anomalous or statistically non-significant. This commentary draws on the research article by Moeeni and colleagues, as an exemplary work which has not adapted measures of two key economic determinants of fertility behavior, namely gender inequality and opportunity costs of childbearing, to the realities of Iran's economy. Measurement adaptations that can improve the study are discussed.
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be addressed systematically, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine-scale development and in the locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used, in conjunction with an appropriate threshold criterion, to adapt the non-uniform grid. Other essential features of the numerical algorithm include an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
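A much-reduced 1D analogue of the wavelet-based refinement criterion is to compute detail (wavelet) coefficients of the sampled field and flag for refinement only the locations where they exceed a threshold. The Haar-type transform, test field, and threshold in the sketch below are assumptions for illustration; the actual WAMR method uses higher-order wavelets on multidimensional adaptive grids with rigorous error control.

```python
# Simplified 1D analogue of wavelet-based grid adaptation: Haar-type detail
# coefficients of a sampled field are thresholded, and only locations with
# large coefficients would receive finer resolution. The test field and the
# threshold are assumptions chosen for illustration.
import numpy as np

def haar_step(signal):
    """One decomposition level: pairwise averages and detail coefficients."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / 2.0, (even - odd) / 2.0

x = np.linspace(0.0, 1.0, 256)
plume = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.05) ** 2)

threshold = 1e-3
signal = plume.copy()
for level in range(4):                       # successively coarser levels
    signal, details = haar_step(signal)
    flagged = np.abs(details) > threshold    # where fine resolution is needed
    print(f"level {level}: refine at {flagged.sum()} of {flagged.size} locations")
```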
Photon scattering from a system of multilevel quantum emitters. I. Formalism
NASA Astrophysics Data System (ADS)
Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.
2018-04-01
We introduce a formalism to solve the problem of photon scattering from a system of multilevel quantum emitters. Our approach provides a direct solution of the scattering dynamics. As such the formalism gives the scattered fields' amplitudes in the limit of a weak incident intensity. Our formalism is equipped to treat both multiemitter and multilevel emitter systems, and is applicable to a plethora of photon-scattering problems, including conditional state preparation by photodetection. In this paper, we develop the general formalism for an arbitrary geometry. In the following paper (part II) S. Das et al. [Phys. Rev. A 97, 043838 (2018), 10.1103/PhysRevA.97.043838], we reduce the general photon-scattering formalism to a form that is applicable to one-dimensional waveguides and show its applicability by considering explicit examples with various emitter configurations.
Classic Classroom Activities: The Oxford Picture Dictionary Program.
ERIC Educational Resources Information Center
Weiss, Renee; Adelson-Goldstein, Jayme; Shapiro, Norma
This teacher resource book offers over 100 reproducible communicative practice activities and 768 picture cards based on the vocabulary of the Oxford Picture Dictionary. Teacher's notes and instructions, including adaptations for multilevel classes, are provided. The activities book has up-to-date art and graphics, explaining over 3700 words. The…
Math Achievement: A Role Strain and Adaptation Approach
ERIC Educational Resources Information Center
Williams, Krystal L.; Burt, Brian A.; Hilton, Adriel A.
2016-01-01
Purpose: This study aims to better understand how students' academic strains and multilevel strengths relate to their math achievement, with a particular emphasis on underrepresented students of color and girls given the need to broaden science, technology, engineering and math (STEM) participation for these groups. Design/methodology/approach:…
FBILI method for multi-level line transfer
NASA Astrophysics Data System (ADS)
Kuzmanovska, O.; Atanacković, O.; Faurobert, M.
2017-07-01
Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such a non-linear and non-local problem be as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme characterized by a very high convergence rate without the need to complement it with additional acceleration techniques. In this paper we describe in more detail the implementation of the FBILI method for multilevel atom line transfer in 1D. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well-known code MULTI.
Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures
NASA Technical Reports Server (NTRS)
Yungster, S.; Radhakrishnan, K.
2002-01-01
A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme, and a point implicit, first-order-accurate, time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low amplitude oscillation mode to a low frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak
2012-01-01
The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridge converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. A genetic algorithm (GA) is the stochastic search method used to find the solution of the set of equations in which the input voltages are the known variables and the switching angles are the unknown variables. With the dataset generated by the GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. This trained ANN then senses the voltage of each cell and produces the switching angles in order to regulate the fundamental at 120 V and eliminate or minimize the low-order harmonics while operating in real time.
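The underlying set of equations has a simple form for a cascaded H-bridge staircase waveform with quarter-wave symmetry: the n-th odd harmonic amplitude is (4/(n*pi)) * sum_k V_k cos(n*theta_k), where V_k are the (possibly unequal) DC-source voltages and theta_k the switching angles. The sketch below evaluates these harmonics and a GA-style fitness for a made-up candidate; the voltages, angles, harmonic set, and the assumption that the 120 V target refers to the rms fundamental are all illustrative, and the reported method searches the angles with a GA and then trains an ANN on the resulting dataset for real-time use.

```python
# Sketch of the selective-harmonic relations behind the angle calculation for
# a cascaded H-bridge staircase waveform with quarter-wave symmetry. The cell
# voltages, candidate angles, targeted harmonics, and the 120 V rms assumption
# are illustrative only.
import math

def harmonic(n, angles_rad, vdc):
    """Peak amplitude of the n-th (odd) harmonic of the staircase output."""
    return 4.0 / (n * math.pi) * sum(v * math.cos(n * a) for v, a in zip(vdc, angles_rad))

def fitness(angles_deg, vdc, v1_target=120.0 * math.sqrt(2.0), low_harmonics=(5, 7, 11, 13)):
    """GA-style objective: hit the fundamental and suppress selected low harmonics."""
    angles = [math.radians(a) for a in angles_deg]
    err = abs(harmonic(1, angles, vdc) - v1_target)
    return err + sum(abs(harmonic(n, angles, vdc)) for n in low_harmonics)

vdc = [36.0, 38.0, 40.0, 42.0, 44.0]        # unequal DC-source voltages (made up)
angles = [6.6, 18.9, 27.2, 45.1, 62.3]      # one candidate set of switching angles (made up)
rad = [math.radians(a) for a in angles]
print(f"fundamental peak ~ {harmonic(1, rad, vdc):.1f} V")
print(f"fitness (lower is better): {fitness(angles, vdc):.2f}")
```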
On-Chip Hardware for Cell Monitoring: Contact Imaging and Notch Filtering
2005-07-07
... a polymer carrier. Spectrophotometer chosen and purchased for testing optical filters and materials. Characterization and comparison of fabricated ... reproducibility of behavior. Multi-level SU8 process developed. Optimization of actuator for closing vial lids and development of lid sealing technology is ... bending angles characterized as a function of temperature in NaDBS solution. Photopatternable polymers are a viable interim packaging solution; through ...
NASA Astrophysics Data System (ADS)
Hwang, Ihn; Wang, Wei; Hwang, Sun Kak; Cho, Sung Hwan; Kim, Kang Lib; Jeong, Beomjin; Huh, June; Park, Cheolmin
2016-05-01
The characteristic source-drain current hysteresis frequently observed in field-effect transistors with networked single-walled carbon-nanotube (NSWNT) channels is problematic for the reliable switching and sensing performance of devices. However, the two distinct current states of the hysteresis curve at zero gate voltage can be useful for memory applications. In this work, we demonstrate a novel non-volatile transistor memory with solution-processed NSWNTs which is suitable for multilevel data programming and reading. A polymer passivation layer with a small amount of water employed on top of the NSWNT channel serves as an efficient gate-voltage-dependent charge trapping and de-trapping site. A systematic investigation shows that the water mixed into the polymer passivation solution is critical for reliable non-volatile memory operation. The optimized device is air-stable and temperature-resistant up to 80 °C and exhibits excellent non-volatile memory performance with an on/off current ratio greater than 10^4, a switching time of less than 100 ms, data retention longer than 4000 s, and write/read endurance over 100 cycles. Furthermore, the gate-voltage-dependent charge injection mediated by water in the passivation layer allowed for multilevel operation of our memory, in which 4 distinct current states were programmed repetitively and preserved over a long time period.
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
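As a rough external point of reference only (not the paper's optimal two-level pattern), the classical single-level first-order approximation puts the checkpoint interval near W ≈ sqrt(2·C·MTBF) (the Young/Daly formula). The sketch below applies that formula independently to each of two levels with made-up overheads and failure rates.

```python
# Rough point of reference only: the classical single-level first-order
# approximation W ~ sqrt(2 * C * MTBF) (Young/Daly), applied independently to
# each level. This is NOT the paper's optimal two-level pattern; the costs
# and failure rates below are made-up examples.
import math

def young_daly_interval(checkpoint_cost_s, mtbf_s):
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

levels = {
    "level 1 (transient memory errors)": {"cost": 5.0, "mtbf": 3600.0},
    "level 2 (node crashes)": {"cost": 120.0, "mtbf": 86400.0},
}
for name, p in levels.items():
    w = young_daly_interval(p["cost"], p["mtbf"])
    waste = p["cost"] / w + w / (2.0 * p["mtbf"])   # first-order expected waste fraction
    print(f"{name}: checkpoint every ~{w / 60.0:.1f} min, expected waste ~{waste:.1%}")
```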
New Bandwidth Efficient Parallel Concatenated Coding Schemes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.
1996-01-01
We propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable iterative decoding structure. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK and 16QAM signal constellations.
Armengaud, Patrick; Sulpice, Ronan; Miller, Anthony J; Stitt, Mark; Amtmann, Anna; Gibon, Yves
2009-06-01
Potassium (K) is required in large quantities by growing crops, but faced with high fertilizer prices, farmers often neglect K application in favor of nitrogen and phosphorus. As a result, large areas of farmland are now depleted of K. K deficiency affects the metabolite content of crops with negative consequences for nutritional quality, mechanical stability, and pathogen/pest resistance. Known functions of K in solute transport, protein synthesis, and enzyme activation point to a close relationship between K and metabolism, but it is unclear which of these are the most critical ones and should be targeted in biotechnological efforts to improve K usage efficiency. To identify metabolic targets and signaling components of K stress, we adopted a multilevel approach combining transcript profiles with enzyme activities and metabolite profiles of Arabidopsis (Arabidopsis thaliana) plants subjected to low K and K resupply. Roots and shoots were analyzed separately. Our results show that regulation of enzymes at the level of transcripts and proteins is likely to play an important role in plant adaptation to K deficiency by (1) maintaining carbon flux into amino acids and proteins, (2) decreasing negative metabolic charge, and (3) increasing the nitrogen-carbon ratio in amino acids. However, changes in transcripts and enzyme activities do not explain the strong and reversible depletion of pyruvate and accumulation of sugars observed in the roots of low-K plants. We propose that the primary cause of metabolic disorders in low-K plants resides in the direct inhibition of pyruvate kinase activity by low cytoplasmic K in root cells.
Teaching All the Children: Stories from the Classroom.
ERIC Educational Resources Information Center
Leavitt, Midge, Ed.
This book presents personal narratives from New Brunswick teachers concerning their experiences in trying to meet the academic and social needs of children in multi-level classes. Topics include the first year of teaching, the process of adapting for students who have special needs, cooperative learning in the class, integration experiences,…
Learning Qualitative and Quantitative Reasoning in a Microworld for Elastic Impacts.
ERIC Educational Resources Information Center
Ploetzner, Rolf; And Others
1990-01-01
Discusses the artificial-intelligence-based microworld DiBi and MULEDS, a multilevel diagnosis system, developed to adapt tutoring style to the individual learner. Explains that DiBi sets up a learning environment that simulates elastic impacts as a subtopic of classical mechanics, supporting reasoning on different levels of mental domain…
Liu, Peng; Zhang, Jingxue; Wang, Dunyou
2017-06-07
A double-inversion mechanism of the F(-) + CH3I reaction was discovered in aqueous solution using combined multi-level quantum mechanics theories and molecular mechanics. The stationary points along the reaction path show very different structures from the ones in the gas phase because of the interactions between the solvent and solute, especially strong hydrogen bonds. An intermediate complex, a minimum on the potential of mean force, was found to serve as a connecting link between the abstraction-induced inversion transition state and the Walden-inversion transition state. The potentials of mean force were calculated with both the DFT/MM and CCSD(T)/MM levels of theory. Our calculated free energy barrier of the abstraction-induced inversion is 69.5 kcal mol(-1) at the CCSD(T)/MM level of theory, which agrees with the value of 72.9 kcal mol(-1) calculated using the Born solvation model and gas-phase data; and our calculated free energy barrier of the Walden inversion is 24.2 kcal mol(-1), which agrees very well with the experimental value of 25.2 kcal mol(-1) in aqueous solution. The calculations show that the aqueous solution makes significant contributions to the potentials of mean force and exerts a large impact on the molecular-level evolution along the reaction pathway.
NASA Astrophysics Data System (ADS)
Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.
2018-04-01
In a preceding paper we introduced a formalism to study the scattering of low-intensity fields from a system of multilevel emitters embedded in a three-dimensional (3D) dielectric medium. Here we show how this photon-scattering relation can be used to analyze the scattering of single photons and weak coherent states from any generic multilevel quantum emitter coupled to a one-dimensional (1D) waveguide. The reduction of the photon-scattering relation to 1D waveguides provides a direct solution of the scattering problem involving low-intensity fields in the waveguide QED regime. To show how our formalism works, we consider examples of multilevel emitters and evaluate the transmitted and reflected field amplitude. Furthermore, we extend our study to include the dynamical response of the emitters for scattering of a weak coherent photon pulse. As our photon-scattering relation is based on the Heisenberg picture, it is quite useful for problems involving photodetection in the waveguide architecture. We show this by considering a specific problem of state generation by photodetection in a multilevel emitter, where our formalism exhibits its full potential. Since the considered emitters are generic, the 1D results apply to a plethora of physical systems such as atoms, ions, quantum dots, superconducting qubits, and nitrogen-vacancy centers coupled to a 1D waveguide or transmission line.
Kwon, S C; Patel, S; Choy, C; Zanowiak, J; Rideout, C; Yi, S; Wyatt, L; Taher, M D; Garcia-Dia, M J; Kim, S S; Denholm, T K; Kavathe, R; Islam, N S
2017-09-01
Faith-based organizations (FBOs) (e.g., churches, mosques, and gurdwaras) can play a vital role in health promotion. The Racial and Ethnic Approaches to Community Health for Asian Americans (REACH FAR) Project is implementing a multi-level and evidence-based health promotion and hypertension (HTN) control program in faith-based organizations serving Asian American (AA) communities (Bangladeshi, Filipino, Korean, Asian Indian) across multiple denominations (Christian, Muslim, and Sikh) in New York/New Jersey (NY/NJ). This paper presents baseline results and describes the cultural adaptation and implementation process of the REACH FAR program across diverse FBOs and religious denominations serving AA subgroups. Working with 12 FBOs, informed by implementation research and guided by a cultural adaptation framework and community-engaged approaches, REACH FAR strategies included (1) implementing healthy food policies for communal meals and (2) delivering a culturally-linguistically adapted HTN management coaching program. Using the Ecological Validity Model (EVM), the program was culturally adapted across congregation and faith settings. Baseline measures include (i) Congregant surveys assessing social norms and diet (n = 946), (ii) HTN participant program surveys (n = 725), (iii) FBO environmental strategy checklists (n = 13), and (iv) community partner in-depth interviews assessing project feasibility (n = 5). We describe the adaptation process and baseline assessments of FBOs. In year 1, we reached 3790 (nutritional strategies) and 725 (HTN program) via AA FBO sites. Most AA FBOs lack nutrition policies and present prime opportunities for evidence-based multi-level interventions. REACH FAR presents a promising health promotion implementation program that may result in significant community reach.
Cairney, Paul; Oliver, Kathryn
2017-04-26
There is extensive health and public health literature on the 'evidence-policy gap', exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems. We identify insights from secondary data, namely systematic reviews, critical analysis and policy theories relevant to evidence-based policymaking. The studies are drawn primarily from countries such as the United States, United Kingdom, Canada, Australia and New Zealand. We combine empirical and normative elements to identify the ways in which scientists can, do and could influence policy. We identify two important dilemmas, for scientists and researchers, that arise from our initial advice. First, effective actors combine evidence with manipulative emotional appeals to influence the policy agenda - should scientists do the same, or would the reputational costs outweigh the policy benefits? Second, when adapting to multi-level policymaking, should scientists prioritise 'evidence-based' policymaking above other factors? The latter includes governance principles such as the 'co-production' of policy between local public bodies, interest groups and service users. This process may be based primarily on values and involve actors with no commitment to a hierarchy of evidence. We conclude that successful engagement in 'evidence-based policymaking' requires pragmatism, combining scientific evidence with governance principles, and persuasion to translate complex evidence into simple stories. To maximise the use of scientific evidence in health and public health policy, researchers should recognise the tendency of policymakers to base judgements on their beliefs, and shortcuts based on their emotions and familiarity with information; learn 'where the action is', and be prepared to engage in long-term strategies to be able to influence policy; and, in both cases, decide how far you are willing to go to persuade policymakers to act and secure a hierarchy of evidence underpinning policy. These are value-driven and political, not just 'evidence-based', choices.
Computerized Adaptive Testing with Item Clones. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; van der Linden, Wim J.
To reduce the cost of item writing and to enhance the flexibility of item presentation, items can be generated by item-cloning techniques. An important consequence of cloning is that it may cause variability on the item parameters. Therefore, a multilevel item response model is presented in which it is assumed that the item parameters of a…
Bray, Jeremy W.; Kelly, Erin L.; Hammer, Leslie B.; Almeida, David M.; Dearing, James W.; King, Rosalind B.; Buxton, Orfeu M.
2013-01-01
Recognizing a need for rigorous, experimental research to support the efforts of workplaces and policymakers in improving the health and wellbeing of employees and their families, the National Institutes of Health and the Centers for Disease Control and Prevention formed the Work, Family & Health Network (WFHN). The WFHN is implementing an innovative multisite study with a rigorous experimental design (adaptive randomization, control groups), comprehensive multilevel measures, a novel and theoretically based intervention targeting the psychosocial work environment, and translational activities. This paper describes challenges and benefits of designing a multilevel and transdisciplinary research network that includes an effectiveness study to assess intervention effects on employees, families, and managers; a daily diary study to examine effects on family functioning and daily stress; a process study to understand intervention implementation; and translational research to understand and inform diffusion of innovation. Challenges were both conceptual and logistical, spanning all aspects of study design and implementation. In dealing with these challenges, however, the WFHN developed innovative, transdisciplinary, multi-method approaches to conducting workplace research that will benefit both the research and business communities. PMID:24618878
Liu, Peng; Li, Chen; Wang, Dunyou
2017-10-19
The Cl(-) + CH3I → CH3Cl + I(-) reaction in water was studied using combined multilevel quantum mechanics theories and molecular mechanics with an explicit water solvent model. The study shows a significant influence of the aqueous solution on the structures of the stationary points along the reaction pathway. A detailed, atomic-level evolution of the reaction mechanism shows a concerted one-bond-broken and one-bond-formed mechanism, as well as a synchronized charge-transfer process. The potentials of mean force calculated with the CCSD(T) and DFT treatments of the solute produce free activation barriers of 24.5 and 19.0 kcal/mol, respectively, which agree with the experimental value of 22.0 kcal/mol. The solvent effects have also been quantitatively analyzed: in total, the solvent effects raise the activation energy by 20.2 kcal/mol, showing a significant impact on this reaction in water.
NASA Astrophysics Data System (ADS)
Rahman, Md. Saifur; Lee, Yiu-Yin
2017-10-01
In this study, a new modified multi-level residue harmonic balance method is presented and adopted to investigate the forced nonlinear vibrations of axially loaded double beams. Although numerous nonlinear beam or linear double-beam problems have been tackled and solved, there have been few studies of this nonlinear double-beam problem. The geometric nonlinear formulations for a double-beam model are developed. The main advantage of the proposed method is that a set of decoupled nonlinear algebraic equations is generated at each solution level. This heavily reduces the computational effort compared with solving the coupled nonlinear algebraic equations generated in the classical harmonic balance method. The proposed method can generate the higher-level nonlinear solutions that are neglected by the previous modified harmonic balance method. The results from the proposed method agree reasonably well with those from the classical harmonic balance method. The effects of damping, axial force, and excitation magnitude on the nonlinear vibrational behaviour are examined.
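The flavor of the first solution level can be conveyed with a single-harmonic balance of a damped, forced Duffing oscillator, used here only as a simple stand-in for the axially loaded double-beam equations treated in the paper. Assuming x ≈ A cos(ωt + φ) and balancing the fundamental harmonic yields one decoupled algebraic equation for the amplitude, cubic in A²; the parameter values below are arbitrary assumptions.

```python
# First-level (single-harmonic) harmonic-balance sketch for a damped, forced
# Duffing oscillator x'' + delta*x' + x + alpha*x**3 = F*cos(omega*t).
# Balancing the fundamental with x ~ A*cos(omega*t + phi) gives one decoupled
# algebraic equation, cubic in u = A**2:
#   (0.75*alpha)**2 * u**3 + 1.5*alpha*(1 - omega**2) * u**2
#       + ((1 - omega**2)**2 + (delta*omega)**2) * u - F**2 = 0
import numpy as np

delta, alpha, F = 0.05, 0.5, 0.2     # damping, cubic stiffness, forcing (assumed)

def response_amplitudes(omega):
    coeffs = [(0.75 * alpha) ** 2,
              1.5 * alpha * (1.0 - omega ** 2),
              (1.0 - omega ** 2) ** 2 + (delta * omega) ** 2,
              -F ** 2]
    u = np.roots(coeffs)
    u = u[np.abs(u.imag) < 1e-9].real
    return np.sqrt(u[u > 0.0])       # one or three coexisting response branches

for omega in (0.6, 0.9, 1.2, 1.5):
    amps = ", ".join(f"{a:.3f}" for a in sorted(response_amplitudes(omega)))
    print(f"omega = {omega:.1f}: amplitude branch(es) A = {amps}")
```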
Quantitative Comparisons of a Coarse-Grid LES with Experimental Data for Backward-Facing Step Flow
NASA Astrophysics Data System (ADS)
McDonough, J. M.
1999-11-01
A novel approach to LES employing an additive decomposition of both solutions and governing equations (similar to "multi-level" approaches of Dubois et al., Dynamic Multilevel Methods and the Simulation of Turbulence, Cambridge University Press, 1999) is presented; its main structural features are lack of filtering of governing equations (instead, solutions are filtered to remove aliasing due to under-resolution) and direct modeling of subgrid-scale primitive variables (rather than modeling their correlations) in the manner proposed by Hylin and McDonough (Int. J. Fluid Mech. Res. 26, 228-256, 1999). A 2-D implementation of this formalism is applied to the backward-facing step flow studied experimentally by Driver and Seegmiller (AIAA J. 23, 163-171, 1985) and Driver et al. (AIAA J. 25, 914-919, 1987), and run on grids sufficiently coarse to permit easy extension to 3-D, industrially realistic problems. Comparisons of computed and experimental mean quantities (velocity profiles, turbulence kinetic energy, reattachment lengths, etc.) and effects of grid refinement will be presented.
Kukafka, Rita; Allegrante, John P; Khan, Sharib; Bigger, J Thomas; Johnson, Stephen B
2013-09-01
Informatics solutions are employed to support clinical research trial tasks in community-based practice settings. Using the IT Implementation Framework (ITIF), an integrative framework intended to guide the synthesis of theoretical perspectives for planning multi-level interventions to enhance IT use, we sought to understand the barriers and facilitators to clinical research in community-based practice settings preliminary to implementing new informatics solutions for improving clinical research infrastructure. The studies were conducted in practices within the Columbia University Clinical Trials Network. A mixed-method approach, including surveys, interviews, time-motion studies, and observations, was used. The data collected, which incorporate predisposing, enabling, and reinforcing factors in IT use, were analyzed according to each phase of the ITIF. Themes identified in the first phase of the ITIF were 1) processes and tools to support clinical trial research and 2) clinical research peripheral to patient care processes. Not all of the problems under these themes were found to be amenable to IT solutions. Using the multi-level orientation of the ITIF, we set forth strategies beyond IT solutions that can have an impact on reengineering clinical research tasks in practice-based settings. Developing strategies to target enabling and reinforcing factors, which focus on organizational factors and the motivation of the practice at large to use IT solutions to integrate clinical research tasks with patient care processes, is most challenging. The ITIF should be used to consider both IT and non-IT solutions concurrently for the reengineering of clinical research in community-based practice settings.
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
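The basic difficulty, recovering x_t from y_t = A x_t when A is a wide aggregation matrix, can be illustrated at a single time step with a plain ridge-regularized least-squares inversion. The routing matrix, sparsity level, noise, and regularization strength below are made-up assumptions; the paper's actual approach embeds the regularization in a calibrated multilevel state-space model.

```python
# Toy single-time-step illustration of the ill-posed inversion y = A x: A is a
# wide aggregation (routing) matrix, so ordinary least squares is
# underdetermined and a ridge-regularized solution is used instead.
import numpy as np

rng = np.random.default_rng(0)
n_links, n_flows = 6, 12                         # fewer measurements than unknowns
A = rng.integers(0, 2, size=(n_links, n_flows)).astype(float)
x_true = np.where(rng.random(n_flows) < 0.25, rng.exponential(5.0, n_flows), 0.0)
y = A @ x_true + rng.normal(0.0, 0.1, n_links)   # aggregate, noisy measurements

lam = 1.0                                        # ridge regularization strength (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_flows), A.T @ y)
x_hat = np.clip(x_hat, 0.0, None)                # traffic flows are non-negative

print("true flows:     ", np.round(x_true, 1))
print("recovered flows:", np.round(x_hat, 1))
```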
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Multilevel description of the DNA molecule translocation in solid-state synthetic nanopores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nosik, V. L., E-mail: v-nosik@yandex.ru; Rudakova, E. B.
2016-07-15
Researchers' interest in the micro- and nanofluidics of polymer solutions and, in particular, DNA ionic solutions is constantly increasing. The use of DNA translocation with a controlled velocity through solid-state nanopores, together with pulsed X-ray beams, in new sequencing schemes opens up new possibilities for studying the structure of DNA and other biopolymers. The problems related to the description of DNA molecular motion in the limited volume of a nanopore are considered.
ERIC Educational Resources Information Center
Chiu, Ming Ming; Pong, Suet-ling; Mori, Izumi; Chow, Bonnie Wing-Yin
2012-01-01
Central to student learning and academic success, the school engagement of immigrant children also reflects their adaptation to a primary institution in their new country. Analysis of questionnaire responses of 276,165 fifteen-year-olds (50% female) and their 10,789 school principals in 41 countries showed that school engagement has distinct,…
Flexible Coordination in Resource-Constrained Domains
1994-07-01
Experiments (TIEs) with planning technologies developed at both BBN (FMERG) and SRI (SOCAP). We have also exported scheduling support capabilities provided by ... SRI's SOCAP course of action (COA) plan generator. * Development and demonstration of distributed, multi-level deployment scheduling - Through analysis ... scheduler was adapted for integration with the SOCAP planning system to provide feedback on transportation feasibility during generation of the ...
ERIC Educational Resources Information Center
ADAMS, CHARLES F.
IN 1965 TEN NEGRO AND PUERTO RICAN GIRLS BEGAN CLERICAL TRAINING IN THE NATIONAL ASSOCIATION OF MANUFACTURERS (NAM) TYPING LABORATORY I (TEELAB-I), A PILOT PROJECT TO DEVELOP A SYSTEM OF TRAINING TYPISTS WITHIN THE INDUSTRIAL ENVIRONMENT. THE INITIAL SYSTEM, AN ADAPTATION OF GREGG AUDIO MATERIALS TO A MACHINE TECHNOLOGY, TAUGHT ACCURACY, SPEED…
Hogeweg, Paulien
2012-01-01
Most of evolutionary theory has abstracted away from how information is coded in the genome and how this information is transformed into traits on which selection takes place. While in the earliest stages of biological evolution, in the RNA world, the mapping from the genotype into function was largely predefined by the physical-chemical properties of the evolving entities (RNA replicators, e.g. from sequence to folded structure and catalytic sites), in present-day organisms, the mapping itself is the result of evolution. I will review results of several in silico evolutionary studies which examine the consequences of evolving the genetic coding, and the ways this information is transformed, while adapting to prevailing environments. Such multilevel evolution leads to long-term information integration. Through genome, network, and dynamical structuring, the occurrence and/or effect of random mutations becomes nonrandom, and facilitates rapid adaptation. This is what does happen in the in silico experiments. Is it also what did happen in biological evolution? I will discuss some data that suggest that it did. In any case, these results provide us with novel search images to tackle the wealth of biological data.
Group adaptation, formal darwinism and contextual analysis.
Okasha, S; Paternotte, C
2012-06-01
We consider the question: under what circumstances can the concept of adaptation be applied to groups, rather than individuals? Gardner and Grafen (2009, J. Evol. Biol. 22: 659-671) develop a novel approach to this question, building on Grafen's 'formal Darwinism' project, which defines adaptation in terms of links between evolutionary dynamics and optimization. They conclude that only clonal groups, and to a lesser extent groups in which reproductive competition is repressed, can be considered as adaptive units. We re-examine the conditions under which the selection-optimization links hold at the group level. We focus on an important distinction between two ways of understanding the links, which have different implications regarding group adaptationism. We show how the formal Darwinism approach can be reconciled with G.C. Williams' famous analysis of group adaptation, and we consider the relationships between group adaptation, the Price equation approach to multi-level selection, and the alternative approach based on contextual analysis.
Conflict Resolution in Mexican-Origin Couples: Culture, Gender, and Marital Quality
ERIC Educational Resources Information Center
Wheeler, Lorey A.; Updegraff, Kimberly A.; Thayer, Shawna M.
2010-01-01
This study examined associations between Mexican-origin spouses' conflict resolution strategies (i.e., nonconfrontation, solution orientation, and control) and (a) gender-typed qualities and attitudes, (b) cultural orientations, and (c) marital quality in a sample of 227 couples. Results of multilevel modeling revealed that Mexican cultural…
Multilevel Confirmatory Factor Analysis of the Teacher My Class Inventory-Short Form
ERIC Educational Resources Information Center
Villares, Elizabeth; Mariani, Melissa; Sink, Christopher A.; Colvin, Kimberly
2016-01-01
Researchers analyzed data from elementary teachers (N = 233) to further establish the psychometric soundness of the Teacher My Class Inventory-Short Form. Supporting previous psychometric research, confirmatory factor analyses findings supported the factorial validity of the hypothesized five-factor solution. Internal reliability estimates were…
3D GGO candidate extraction in lung CT images using multilevel thresholding on supervoxels
NASA Astrophysics Data System (ADS)
Huang, Shan; Liu, Xiabi; Han, Guanghui; Zhao, Xinming; Zhao, Yanfeng; Zhou, Chunwu
2018-02-01
The earlier detection of ground glass opacity (GGO) is of great importance since GGOs are more likely to be malignant than solid nodules. However, the detection of GGOs is a difficult task in lung cancer screening. This paper proposes a novel GGO candidate extraction method, which performs multilevel thresholding on supervoxels in 3D lung CT images. Firstly, we segment the lung parenchyma based on the Otsu algorithm. Secondly, voxels that are adjacent in 3D discrete space and share similar gray levels are clustered into supervoxels; this procedure enhances GGOs and reduces computational complexity. Thirdly, the Hessian matrix is used to emphasize focal GGO candidates. Lastly, an improved adaptive multilevel thresholding method is applied to the segmented clusters to extract GGO candidates. The proposed method was evaluated on a set of 19 lung CT scans containing 166 GGO lesions from the Lung CT Imaging Signs (LISS) database. The experimental results show that our proposed GGO candidate extraction method is effective, with a sensitivity of 100% and 26.3 false positives per scan (665 GGO candidates, 499 non-GGO regions and 166 GGO regions). It can handle both focal GGOs and diffuse GGOs.
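The basic ingredient, multilevel (here two-threshold, three-class) Otsu-style thresholding that maximizes between-class variance over a gray-level histogram, can be sketched as below. The synthetic intensity values and bin count are assumptions for illustration; the paper's improved adaptive variant operates on supervoxels of 3D CT volumes rather than on raw voxel histograms.

```python
# Generic two-threshold (three-class) Otsu-style multilevel thresholding on a
# gray-level histogram, choosing the thresholds that maximize between-class
# variance by exhaustive search. Synthetic intensities and bin count are
# assumptions only.
import numpy as np

def multilevel_otsu(values, n_bins=64):
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    mu_total = (p * centers).sum()
    best, best_pair = -1.0, (1, 2)
    for t1 in range(1, n_bins - 1):
        for t2 in range(t1 + 1, n_bins):
            var_between = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, n_bins)):
                w = p[lo:hi].sum()
                if w > 0.0:
                    mu = (p[lo:hi] * centers[lo:hi]).sum() / w
                    var_between += w * (mu - mu_total) ** 2
            if var_between > best:
                best, best_pair = var_between, (t1, t2)
    return centers[best_pair[0]], centers[best_pair[1]]

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-900.0, 40.0, 4000),   # aerated lung (assumed HU)
                         rng.normal(-550.0, 60.0, 600),    # ground-glass range
                         rng.normal(50.0, 80.0, 400)])     # solid tissue
t_low, t_high = multilevel_otsu(sample)
print(f"thresholds ~ {t_low:.0f} and {t_high:.0f} (synthetic HU values)")
```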
Climate Shocks and the Timing of Migration from Mexico
Nawrotzki, Raphael J.; DeWaard, Jack
2016-01-01
Although evidence is increasing that climate shocks influence human migration, it is unclear exactly when people migrate after a climate shock. A climate shock might be followed by an immediate migration response. Alternatively, migration, as an adaptive strategy of last resort, might be delayed and employed only after available in-situ (in-place) adaptive strategies are exhausted. In this paper, we explore the temporally lagged association between a climate shock and future migration. Using multilevel event-history models, we analyze the risk of Mexico-U.S. migration over a seven-year period after a climate shock. Consistent with a delayed response pattern, we find that the risk of migration is low immediately after a climate shock and increases as households pursue and cycle through in-situ adaptive strategies available to them. However, about three years after the climate shock, the risk of migration decreases, suggesting that households are eventually successful in adapting in-situ. PMID:27795604
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multiple levels (known as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))) is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.
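The additive-correction idea can be sketched compactly: coarse equations are formed by summing fine-grid equations over blocks, and the resulting blockwise-constant correction is added back to the fine solution. The Python/NumPy sketch below illustrates one such two-level cycle for a 1D Poisson problem; it is illustrative only, not the formulation analyzed in the paper.

```python
# Hedged sketch: one additive-correction cycle (Gauss-Seidel smoothing plus a
# blockwise-constant coarse correction) for the 1D Poisson system A u = f.
import numpy as np

n, block = 64, 4                                           # fine unknowns, block size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Laplacian (Dirichlet)
f = np.ones(n)
u = np.zeros(n)

def gauss_seidel(A, f, u, sweeps=3):
    for _ in range(sweeps):
        for i in range(len(u)):
            u[i] = (f[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
    return u

nb = n // block
P = np.zeros((n, nb))                                      # piecewise-constant prolongation
for b in range(nb):
    P[b*block:(b+1)*block, b] = 1.0

for cycle in range(20):
    u = gauss_seidel(A, f, u)
    r = f - A @ u                                          # fine-grid residual
    Ac = P.T @ A @ P                                       # coarse matrix = block sums of A
    rc = P.T @ r                                           # coarse residual = block sums of r
    u += P @ np.linalg.solve(Ac, rc)                       # add a constant correction per block
    print(cycle, np.linalg.norm(f - A @ u))
```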
Reciprocal Exchange Patterned by Market Forces Helps Explain Cooperation in a Small-Scale Society.
Jaeggi, Adrian V; Hooper, Paul L; Beheim, Bret A; Kaplan, Hillard; Gurven, Michael
2016-08-22
Social organisms sometimes depend on help from reciprocating partners to solve adaptive problems [1], and individual cooperation strategies should aim to offer high supply commodities at low cost to the donor in exchange for high-demand commodities with large return benefits [2, 3]. Although such market dynamics have been documented in some animals [4-7], naturalistic studies of human cooperation are often limited by focusing on single commodities [8]. We analyzed cooperation in five domains (meat sharing, produce sharing, field labor, childcare, and sick care) among 2,161 household dyads of Tsimane' horticulturalists, using Bayesian multilevel models and information-theoretic model comparison. Across domains, the best-fit models included kinship and residential proximity, exchanges in kind and across domains, measures of supply and demand and their interactions with exchange, and household-specific exchange slopes. In these best models, giving, receiving, and reciprocating were to some extent shaped by market forces, and reciprocal exchange across domains had a strong partial effect on cooperation independent of more exogenous factors like kinship and proximity. Our results support the view that reciprocal exchange can provide a reliable solution to adaptive problems [8-11]. Although individual strategies patterned by market forces may generate gains from trade in any species [3], humans' slow life history and skill-intensive foraging niche favor specialization and create interdependence [12, 13], thus stabilizing cooperation and fostering divisions of labor even in informal economies [14, 15]. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, S. H.; Xia, X. H.; Wang, Y. D.; Wang, X. L.; Tu, J. P.
2017-02-01
It is a core task to find solutions that suppress the "shuttle effect" of polysulfides and improve the high-rate capability of the sulfur cathode in lithium-sulfur batteries. Herein we propose for the first time a concept of multileveled blocking "dams" to suppress the diffusion of polysulfides. We report a facile and effective strategy to construct multidimensional conductive carbon hosts for the accommodation of active sulfur. Multidimensional ternary carbon networks (MTCNs) with 0D nanospheres, 1D nanotubes and 2D nanoflakes are organically combined to provide multileveled conductive channels that reserve active sulfur and promote stable, sustained reactions. In light of the enhanced conductivity and multileveled blocking "dams" for polysulfides, the designed MTCNs/S cathode has been demonstrated with noticeable improvements in discharge capacity (1472 mAh g-1 at 0.1 C) and long-term cycling stability (65% retention at 5.0 C after 500 cycles). Our research may provide new insight into the gradient blocking of polysulfides with the help of multidimensional carbon networks.
Adaptive Multilevel Middleware for Object Systems
2006-12-01
the system at the system-call level or using the CORBA-standard Extensible Transport Framework (ETF). Transparent insertion is highly desirable from an...often as it needs to. This is remedied by using the real-time scheduling class in a stock Linux kernel. We used the sched_setscheduler system call (with...real-time scheduling class (SCHED_FIFO) for all the ML-NFD programs; later experiments with CPU load indicate that a stock Linux kernel is not
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
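To make the quadrature step concrete, here is a hedged Python sketch that fits a much simpler relative of the models above: a random-intercept exponential proportional hazards (shared frailty) model, with the cluster-level random effect integrated out by nonadaptive Gauss-Hermite quadrature. The simulated data and all parameter names are invented for the example; the paper's models are richer and provided as Stata software.

```python
# Hedged sketch: random-intercept exponential PH model estimated by maximum
# likelihood with nonadaptive Gauss-Hermite quadrature (simulated toy data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_clusters, n_per = 50, 20
beta0_true, sigma_true = np.log(0.5), 0.7
cluster = np.repeat(np.arange(n_clusters), n_per)
b_true = rng.normal(0.0, sigma_true, n_clusters)
rate = np.exp(beta0_true + b_true[cluster])
t = rng.exponential(1.0 / rate)                      # event times
c = rng.exponential(2.0, t.shape)                    # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

nodes, weights = np.polynomial.hermite.hermgauss(15)  # nodes/weights for weight exp(-x^2)

def neg_loglik(theta):
    beta0, log_sigma = theta
    sigma = np.exp(log_sigma)
    total = 0.0
    for i in range(n_clusters):
        idx = cluster == i
        ti, di = time[idx], event[idx]
        b = np.sqrt(2.0) * sigma * nodes             # random-effect values at the nodes
        loglam = beta0 + b[:, None]                  # (nodes, subjects in cluster)
        ll = (di * loglam - np.exp(loglam) * ti).sum(axis=1)
        # marginal cluster likelihood: (1/sqrt(pi)) * sum_k w_k * exp(ll_k)
        total += np.log((weights * np.exp(ll)).sum() / np.sqrt(np.pi))
    return -total

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("beta0, sigma:", fit.x[0], np.exp(fit.x[1]))
```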
An Adaptive Multilevel Security Framework for the Data Stored in Cloud Environment
Dorairaj, Sudha Devi; Kaliannan, Thilagavathy
2015-01-01
Cloud computing is renowned for delivering information technology services based on internet. Nowadays, organizations are interested in moving their massive data and computations into cloud to reap their significant benefits of on demand service, resource pooling, and rapid elasticity that helps to satisfy the dynamically changing infrastructure demand without the burden of owning, managing, and maintaining it. Since the data needs to be secured throughout its life cycle, security of the data in cloud is a major challenge to be concentrated on because the data is in third party's premises. Any uniform simple or high level security method for all the data either compromises the sensitive data or proves to be too costly with increased overhead. Any common multiple method for all data becomes vulnerable when the common security pattern is identified at the event of successful attack on any information and also encourages more attacks on all other data. This paper suggests an adaptive multilevel security framework based on cryptography techniques that provide adequate security for the classified data stored in cloud. The proposed security system acclimates well for cloud environment and is also customizable and more reliant to meet the required level of security of data with different sensitivity that changes with business needs and commercial conditions. PMID:26258165
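As a purely illustrative sketch of the tiered idea (not the authors' framework), the following Python snippet maps a data-sensitivity level to progressively stronger protection using the widely available `cryptography` package; the sensitivity labels, key handling, and record layout are hypothetical and greatly simplified.

```python
# Hedged sketch: choose protection strength by data sensitivity level.
# Simplified illustration only; a real system needs key management, audit, etc.
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(data: bytes, level: str) -> dict:
    if level == "public":
        return {"level": level, "payload": data}               # stored as-is
    if level == "internal":
        key = Fernet.generate_key()                            # authenticated symmetric cipher
        return {"level": level, "key": key,
                "payload": Fernet(key).encrypt(data)}
    if level == "confidential":
        key = AESGCM.generate_key(bit_length=256)              # stronger AEAD cipher
        nonce = os.urandom(12)
        return {"level": level, "key": key, "nonce": nonce,
                "payload": AESGCM(key).encrypt(nonce, data, None)}
    raise ValueError(f"unknown sensitivity level: {level}")

record = protect(b"billing data", "confidential")
print(record["level"], len(record["payload"]))
```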
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. The method of hierarchical parallelism involves partitioning thermal models into several substructured levels wherein an optimal balance among the associated bandwidths is achieved. The details are described in this report, which is organized into two parts. Part 1 describes the parallel modelling methodology and the associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
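A toy Python sketch of the blackboard pattern described above: a shared solution store plus discipline-specific knowledge sources that each refine it until a coordinator declares convergence. The disciplines and update rules here are invented purely for illustration.

```python
# Hedged sketch of a blackboard architecture: a shared design state updated in
# turn by independent "knowledge sources" (disciplines). Toy numbers only.
class Blackboard:
    def __init__(self):
        self.design = {"wing_area": 30.0, "structure_mass": 500.0, "converged": False}

def aerodynamics(bb):
    # pretend aero analysis: required wing area depends on structural mass
    bb.design["wing_area"] = 25.0 + 0.01 * bb.design["structure_mass"]

def structures(bb):
    # pretend structural sizing: mass depends on wing area
    bb.design["structure_mass"] = 400.0 + 3.0 * bb.design["wing_area"]

def coordinator(bb, old_mass):
    # declare convergence when successive iterates barely change
    bb.design["converged"] = abs(old_mass - bb.design["structure_mass"]) < 1e-6

knowledge_sources = [aerodynamics, structures]
bb = Blackboard()
for it in range(50):
    old_mass = bb.design["structure_mass"]
    for ks in knowledge_sources:        # each discipline reads/updates the shared state
        ks(bb)
    coordinator(bb, old_mass)
    if bb.design["converged"]:
        break
print(it, bb.design)
```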
Investigation of the effects of color on judgments of sweetness using a taste adaptation method.
Hidaka, Souta; Shimoda, Kazumasa
2014-01-01
It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.
Cicchetti, Dante
2016-01-01
Developmental theories can be affirmed, challenged, and augmented by incorporating knowledge about atypical ontogenesis. Investigations of the biological, socioemotional, and personality development in individuals with high-risk conditions and psychopathological disorders can provide an entrée into the study of system organization, disorganization, and reorganization. This article examines child maltreatment to illustrate the benefit that can be derived from the study of individuals subjected to nonnormative caregiving experiences. Relative to an average expectable environment, which consists of a species-specific range of environmental conditions that support adaptive development among genetically normal individuals, maltreating families fail to provide many of the experiences that are required for normal development. Principles gleaned from the field of developmental psychopathology provide a framework for understanding multilevel functioning in normality and pathology. Knowledge of normative developmental processes provides the impetus to design and implement randomized control trial (RCT) interventions that can promote resilient functioning in maltreated children.
Women in HIV cure research: multilevel interventions to improve sex equity in recruitment.
Grewe, Mary E; Ma, Yuntong; Gilbertson, Adam; Rennie, Stuart; Tucker, Joseph D
Women are underrepresented in HIV cure research. In this paper we discuss the rationale for including women and propose multilevel strategies to improve sex equity in HIV cure research. The inadequate inclusion of women in HIV cure research is concerning for both scientific and ethical reasons. Biological responses to HIV and HIV treatment, as well as social contexts, differ between men and women, and this may affect the efficacy of curative interventions. Strategies for improving sex equity in HIV cure research include addressing eligibility criteria, adapting recruitment strategies, engaging community members early in the research process, and promoting funder policy changes. We conclude by describing the Gender, Race, and Clinical Experience (GRACE) study, which is one example of how women can be effectively recruited into HIV-related clinical trials. While HIV cure research is currently in the early stages, as it continues to develop it is important to mobilise for adequate inclusion of women.
Explicit and implicit emotion regulation: a multi-level framework
Braunstein, Laura Martin; Gross, James J
2017-01-01
The ability to adaptively regulate emotion is essential for mental and physical well-being. How should we organize the myriad ways people attempt to regulate their emotions? We explore the utility of a framework that distinguishes among four fundamental classes of emotion regulation strategies. The framework describes each strategy class in terms of their behavioral characteristics, underlying psychological processes and supporting neural systems. A key feature of this multi-level framework is its conceptualization of the psychological processes in terms of two orthogonal dimensions that describe (i) the nature of the emotion regulation goal (ranging from implicit to explicit) and (ii) the nature of the emotion change process (ranging from more automatic to more controlled). After describing the core elements of the framework, we use it to review human and animal research on the neural bases of emotion regulation and to suggest key directions for future research on emotion regulation. PMID:28981910
Using Evaluation Research as a Means for Policy Analysis in a "New" Mission-Oriented Policy Context
ERIC Educational Resources Information Center
Amanatidou, Effie; Cunningham, Paul; Gök, Abdullah; Garefi, Ioanna
2014-01-01
Grand challenges stress the importance of multi-disciplinary research, a multi-actor approach in examining the current state of affairs and exploring possible solutions, multi-level governance and policy coordination across geographical boundaries and policy areas, and a policy environment for enabling change both in science and technology and in…
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
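The sequential Monte Carlo machinery in the paper is involved, but the underlying multilevel idea, a telescoping sum over discretization levels with coupled fine and coarse samples, can be sketched in a few lines. The Python sketch below is plain multilevel Monte Carlo (Giles-style) for a toy Euler-discretized SDE expectation, a simpler relative of the MLSMC estimator for normalizing constants studied here.

```python
# Hedged sketch: plain multilevel Monte Carlo for E[X_T] of dX = -X dt + dW,
# X_0 = 1, using Euler steps h_l = T/2^l and coupled fine/coarse paths.
import numpy as np

rng = np.random.default_rng(1)
T, L, N = 1.0, 5, 20000

def level_estimator(level, n_samples):
    """Mean of P_l - P_{l-1} on coupled paths (P_{-1} := 0 at level 0)."""
    nf = 2 ** level
    hf = T / nf
    xf = np.ones(n_samples)
    xc = np.ones(n_samples)
    for step in range(0, nf, 2 if level > 0 else 1):
        if level == 0:
            dw = rng.normal(0.0, np.sqrt(hf), n_samples)
            xf += -xf * hf + dw
        else:
            dw1 = rng.normal(0.0, np.sqrt(hf), n_samples)
            dw2 = rng.normal(0.0, np.sqrt(hf), n_samples)
            xf += -xf * hf + dw1                      # two fine steps...
            xf += -xf * hf + dw2
            xc += -xc * (2 * hf) + (dw1 + dw2)        # ...one coarse step, same noise
    pc = xc if level > 0 else np.zeros(n_samples)
    return np.mean(xf - pc)

estimate = sum(level_estimator(l, N) for l in range(L + 1))
print("MLMC estimate of E[X_T]:", estimate, " exact:", np.exp(-T))
```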
Array-based Hierarchical Mesh Generation in Parallel
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2015-11-03
In this paper, we describe an array-based hierarchical mesh generation capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial mesh that can be used for a number of purposes, ranging from multi-level methods to generating large meshes. The capability is developed under the parallel mesh framework "Mesh Oriented dAtaBase", a.k.a. MOAB. We describe the underlying data structures and algorithms to generate such hierarchies and present numerical results for computational efficiency and mesh quality. In conclusion, we also present results to demonstrate the applicability of the developed capability to a multigrid finite-element solver.
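A hedged, self-contained Python sketch of one step of the kind of uniform refinement such hierarchies are built from: each triangle is split into four by inserting edge midpoints. It uses plain NumPy arrays and Python dictionaries rather than MOAB's array-based data structures.

```python
# Hedged sketch: one uniform refinement pass for a 2D triangle mesh
# (each triangle split into 4 via edge midpoints). Not MOAB's implementation.
import numpy as np

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tris = np.array([[0, 1, 2], [1, 3, 2]])

def refine(verts, tris):
    verts = list(map(tuple, verts))
    midpoint_index = {}                        # edge (i, j), i < j -> new vertex id
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_index:
            vm = tuple((np.array(verts[i]) + np.array(verts[j])) / 2.0)
            midpoint_index[key] = len(verts)
            verts.append(vm)
        return midpoint_index[key]
    new_tris = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.array(verts), np.array(new_tris)

v1, t1 = refine(verts, tris)
print(len(v1), "vertices,", len(t1), "triangles")   # 9 vertices, 8 triangles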
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
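As a hedged illustration of the response-surface loop (fit a second-order polynomial surrogate, check its accuracy, and add design points until an error tolerance is met), here is a small Python sketch. An analytic test function stands in for the finite-element evaluation, and plain random sampling replaces the Shifted Hammersley Method; none of this is the paper's implementation.

```python
# Hedged sketch: iteratively updated second-order (quadratic) response surface.
# A cheap analytic function stands in for the FEM transformer analysis.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):                     # stand-in for an FEM evaluation
    return (1.0 + 2.0 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] * x[:, 1]
            + x[:, 1] ** 2 + 0.1 * x[:, 0] ** 3)

def quad_features(x):                       # [1, x1, x2, x1^2, x1*x2, x2^2]
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x1*x2, x2**2])

X = rng.uniform(-1, 1, size=(6, 2))         # initial design points
y = expensive_model(X)
for it in range(20):
    coeffs, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    Xv = rng.uniform(-1, 1, size=(50, 2))   # fresh validation points
    err = np.max(np.abs(quad_features(Xv) @ coeffs - expensive_model(Xv)))
    print(f"iter {it}: {len(X)} points, max error {err:.3e}")
    if err < 0.05:                          # the error tolerance sets the "design level"
        break
    X = np.vstack([X, rng.uniform(-1, 1, size=(4, 2))])   # add more design points
    y = expensive_model(X)
```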
MAG3D and its application to internal flowfield analysis
NASA Technical Reports Server (NTRS)
Lee, K. D.; Henderson, T. L.; Choo, Y. K.
1992-01-01
MAG3D (multiblock adaptive grid, 3D) is a 3D solution-adaptive grid generation code which redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. The code is applicable to structured grids with a multiblock topology. It is independent of the original grid generator and the flow solver. The code uses the coordinates of an initial grid and the flow solution interpolated onto the new grid. MAG3D uses a numerical mapping and potential theory to modify the grid distribution based on properties of the flow solution on the initial grid. The adaptation technique is discussed, and the capability of MAG3D is demonstrated with several internal flow examples. Advantages of using solution-adaptive grids are also shown by comparing flow solutions on adaptive grids with those on initial grids.
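The redistribution idea, moving a fixed number of grid points so they concentrate where the solution varies quickly, can be sketched in one dimension by equidistributing a monitor function; this is a generic Python illustration, not MAG3D's mapping and potential-theory formulation.

```python
# Hedged sketch: 1D grid-point redistribution by equidistributing an
# arc-length monitor function w = sqrt(1 + (du/dx)^2). Not the MAG3D algorithm.
import numpy as np

n = 41
x = np.linspace(0.0, 1.0, n)                    # initial uniform grid
u = np.tanh(30.0 * (x - 0.5))                   # solution with a sharp layer at x = 0.5

dudx = np.gradient(u, x)
w = np.sqrt(1.0 + dudx**2)                      # monitor: large where u changes fast

# cumulative "monitor mass"; new nodes split it into equal increments
cum = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
targets = np.linspace(0.0, cum[-1], n)
x_new = np.interp(targets, cum, x)              # adapted grid, same number of points

print("min spacing old/new:", np.min(np.diff(x)), np.min(np.diff(x_new)))
```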
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving complex nonlinear systems of equations quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on a prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. For two test cases, an 18-fold acceleration of the solution was obtained using one fourth of the volumes of a global grid with the same solution accuracy.
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamic data. S-ADAPT-generated solutions for DDE problems agreed with the explicit solutions, as well as with MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates from using importance sampling expectation-maximization in S-ADAPT agreed with ones used to generate the data. Published by Elsevier Ireland Ltd.
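A hedged Python sketch of the method of steps itself (not the S-ADAPT/LSODA implementation): the delay term is taken from the history or from a previously computed segment, so each interval of length tau reduces to an ordinary ODE solve. The test equation and history below are chosen purely for illustration.

```python
# Hedged sketch: method of steps for the scalar DDE y'(t) = -y(t - tau),
# with history y(t) = 1 for t <= 0. Each segment [k*tau, (k+1)*tau] is an ODE solve.
import numpy as np
from scipy.integrate import solve_ivp

tau, n_segments = 1.0, 5
history = lambda t: 1.0                          # known solution on t <= 0
segments = []                                    # dense solutions on earlier intervals

def y_delayed(t):
    """Evaluate y(t - tau) from the history or an already-solved segment."""
    s = t - tau
    if s <= 0.0:
        return history(s)
    seg_idx = min(int(s // tau), len(segments) - 1)   # clamp at segment right endpoints
    return segments[seg_idx].sol(s)[0]

y0 = [history(0.0)]
for k in range(n_segments):
    t0, t1 = k * tau, (k + 1) * tau
    sol = solve_ivp(lambda t, y: [-y_delayed(t)], (t0, t1), y0,
                    dense_output=True, max_step=tau / 50)
    segments.append(sol)
    y0 = sol.y[:, -1]

print("y at t =", n_segments * tau, "is", float(y0[0]))
```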
The role of U.S. states in facilitating effective water governance under stress and change
NASA Astrophysics Data System (ADS)
Kirchhoff, Christine J.; Dilling, Lisa
2016-04-01
Worldwide water governance failures undermine effective water management under uncertainty and change. Overcoming these failures requires employing more adaptive, resilient water management approaches; yet, while scholars have advanced theories of what adaptive, resilient approaches should be, there is little empirical evidence to support those normative propositions. To fill this gap, we reviewed the literature to derive theorized characteristics of adaptive, resilient water governance including knowledge generation and use, participation, clear rules for water use, and incorporating nonstationarity. Then, using interviews and documentary analysis focused on five U.S. states' allocation and planning approaches, we examined empirically if embodying these characteristics made states more (or less) adaptive and resilient in practice. We found that adaptive, resilient water governance requires not just possessing these characteristics but combining and building on them. That is, adaptive, resilient water governance requires well-funded, transparent knowledge systems combined with broad, multilevel participatory processes that support learning, strong institutional arrangements that establish authorities and rules and that allow flexibility as conditions change, and resources for integrated planning and allocation. We also found that difficulty incorporating climate change or altering existing water governance paradigms and inadequate funding of water programs undermine adaptive, resilient governance.
Considerations in representing human individuals in social ecological models
Manfredo, Michael J.; Teel, Tara L.; Gavin, Michael C.; Fulton, David C.
2017-01-01
In this chapter we focus on how to integrate the human individual into social-ecological systems analysis, and how to improve research on individual thought and action regarding the environment by locating it within the broader social-ecological context. We discuss three key questions as considerations for future research: (1) is human thought conceptualized as a dynamic and adaptive process, (2) is the individual placed in a multi-level context (including within-person levels, person-group interactions, and institutional and structural factors), and (3) is human thought seen as mutually constructed with the social and natural environment. Increased emphasis on the individual will be essential if we are to understand agency, innovation, and adaptation in social-ecological systems.
Low Rate Transmission of Video Signals Using Adaptive Delta Modulation.
1983-08-15
[OCR residue from the report documentation page; only fragments are recoverable: contract 903-80-C-0476, performing organization Research Foundation of CUNY.] ...transmitted phase, for the demodulation of the signal. AM is presently being used to transmit multilevel symbols by vestigial or single sideband... [Fragment of a table listing modulation formats against data rates of 3600, 4800, 7200 and 9600 bit/s: vestigial-sideband duobinary, 4-phase + AM, vestigial-sideband 8-PSK, and phase-and-amplitude modulation.]
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Brown, H Carolyn Peach; Smit, Barry; Somorin, Olufunso A; Sonwa, Denis J; Nkem, Johnson Ndi
2014-10-01
Tropical forests are vulnerable to climate change, representing a risk for indigenous peoples and forest-dependent communities. Mechanisms to conserve the forest, such as REDD+, could assist in the mitigation of climate change, reduce vulnerability, and enable people to adapt. Ninety-eight interviews were conducted in three countries containing the Congo Basin forest (Cameroon, CAR, and DRC) to investigate perceptions of decision-makers within, and responses of, the institutions of the state, private sector, and civil society to the challenges of climate change. Results indicate that while decision-makers' awareness of climate change is high, direct institutional action is at an early stage. Adaptive capacity is currently low, but it could be enhanced with further development of institutional linkages and increased coordination of multilevel responses across all institutions and with local people. It is important to build networks with forest-dependent stakeholders at the local level, who can contribute knowledge that will build overall institutional adaptive capacity.
Language Model Combination and Adaptation Using Weighted Finite State Transducers
NASA Technical Reports Server (NTRS)
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
2010-01-01
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can either be used to represent different genres or tasks found in diverse text sources, or capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal changes to decoding tools are needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
Development of Omniphobic Desalination Membranes Using a Charged Electrospun Nanofiber Scaffold.
Lee, Jongho; Boo, Chanhee; Ryu, Won-Hee; Taylor, André D; Elimelech, Menachem
2016-05-04
In this study, we present a facile and scalable approach to fabricate omniphobic nanofiber membranes by constructing multilevel re-entrant structures with low surface energy. We first prepared positively charged nanofiber mats by electrospinning a blend polymer-surfactant solution of poly(vinylidene fluoride-co-hexafluoropropylene) (PVDF-HFP) and cationic surfactant (benzyltriethylammonium). Negatively charged silica nanoparticles (SiNPs) were grafted on the positively charged electrospun nanofibers via dip-coating to achieve multilevel re-entrant structures. Grafted SiNPs were then coated with fluoroalkylsilane to lower the surface energy of the membrane. The fabricated membrane showed excellent omniphobicity, as demonstrated by its wetting resistance to various low surface tension liquids, including ethanol with a surface tension of 22.1 mN/m. As a promising application, the prepared omniphobic membrane was tested in direct contact membrane distillation to extract water from highly saline feed solutions containing low surface tension substances, mimicking emerging industrial wastewaters (e.g., from shale gas production). While a control hydrophobic PVDF-HFP nanofiber membrane failed in the desalination/separation process due to low wetting resistance, our fabricated omniphobic membrane exhibited a stable desalination performance for 8 h of operation, successfully demonstrating clean water production from the low surface tension feedwater.
Rush, Jonathan; Hofer, Scott M
2014-06-01
The Positive and Negative Affect Schedule (PANAS) is a widely used measure of emotional experience. The factor structure of the PANAS has been examined predominantly with cross-sectional designs, which fails to disaggregate within-person variation from between-person differences. There is still uncertainty as to the factor structure of positive and negative affect and whether they constitute 2 distinct independent factors. The present study examined the within-person and between-person factor structure of the PANAS in 2 independent samples that reported daily affect over 7 and 14 occasions, respectively. Results from multilevel confirmatory factor analyses revealed that a 2-factor structure at both the within-person and between-person levels, with correlated specific factors for overlapping items, provided good model fit. The best-fitting solution was one where within-person factors of positive and negative affect were inversely correlated, but between-person factors were independent. The structure was further validated through multilevel structural equation modeling examining the effects of cognitive interference, daily stress, physical symptoms, and physical activity on positive and negative affect factors.
Locomotor function after long-duration space flight: effects and motor learning during recovery.
Mulavara, Ajitkumar P; Feiveson, Alan H; Fiedler, James; Cohen, Helen; Peters, Brian T; Miller, Chris; Brady, Rachel; Bloomberg, Jacob J
2010-05-01
Astronauts returning from space flight and performing Earth-bound activities must rapidly transition from the microgravity-adapted sensorimotor state to that of Earth's gravity. The goal of the current study was to assess locomotor dysfunction and recovery of function after long-duration space flight using a test of functional mobility. Eighteen International Space Station crewmembers experiencing an average flight duration of 185 days performed the functional mobility test (FMT) pre-flight and post-flight. To perform the FMT, subjects walked at a self-selected pace through an obstacle course consisting of several pylons and obstacles set up on a base of 10-cm-thick, medium-density foam for a total of six trials per test session. The primary outcome measure was the time to complete the course (TCC, in seconds). To assess the long-term recovery trend of locomotor function after return from space flight, a multilevel exponential recovery model was fitted to the log-transformed TCC data. All crewmembers exhibited altered locomotor function after space flight, with a median 48% increase in the TCC. From the fitted model we calculated that a typical subject would recover to 95% of his/her pre-flight level at approximately 15 days post-flight. In addition, to assess the early motor learning responses after returning from space flight, we modeled performance over the six trials during the first post-flight session by a similar multilevel exponential relation. We found a significant positive correlation between measures of long-term recovery and early motor learning (P < 0.001) obtained from the respective models. We concluded that two types of recovery processes influence an astronaut's ability to re-adapt to Earth's gravity environment. Early motor learning helps astronauts make rapid modifications in their motor control strategies during the first hours after landing. Further, this early motor learning appears to reinforce the adaptive realignment, facilitating re-adaptation to Earth's 1-g environment on return from space flight.
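As a hedged sketch of the kind of exponential recovery model described above, the following Python snippet fits a single-subject exponential recovery curve to made-up log-TCC data with SciPy; the study's actual model is multilevel, and these numbers are invented for illustration.

```python
# Hedged sketch: fit an exponential recovery curve to log-transformed
# time-to-complete-course (TCC) data for one hypothetical subject.
import numpy as np
from scipy.optimize import curve_fit

def recovery(day, baseline, jump, rate):
    """log TCC = pre-flight baseline + post-flight increment decaying with time."""
    return baseline + jump * np.exp(-rate * day)

days = np.array([1, 2, 4, 6, 8, 12, 15, 20, 30])                        # days after landing (made up)
log_tcc = np.array([3.4, 3.3, 3.15, 3.05, 3.0, 2.93, 2.9, 2.88, 2.86])  # made up

params, _ = curve_fit(recovery, days, log_tcc, p0=(2.9, 0.5, 0.2))
baseline, jump, rate = params
# day at which the post-flight increment has shrunk to 5% of its initial value
day_95_recovered = np.log(20.0) / rate
print(f"baseline={baseline:.2f}, jump={jump:.2f}, rate={rate:.2f}/day, "
      f"~95% recovered by day {day_95_recovered:.1f}")
```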
Green, Amy E; Dishop, Christopher R; Aarons, Gregory A
2016-10-01
Community mental health providers often operate within stressful work environments and are at high risk of emotional exhaustion, which can negatively affect job performance and client satisfaction with services. This cross-sectional study examined the relationships between organizational stress, provider adaptability, and organizational commitment. Variables were analyzed with moderated multilevel regression in a sample of 311 mental health providers from 49 community mental health programs. Stressful organizational climate, characterized by high levels of emotional exhaustion, role conflict, and role overload, was negatively related to organizational commitment. Organizational stress moderated the relationship between provider adaptability and organizational commitment, such that those who were more adaptable had greater levels of organizational commitment when organizational stress was low but were less committed than those who were less adaptable when organizational stress was high. Providers higher in adaptability may perceive their organization as a greater fit when the work environment is less stressful; however, highly adaptable providers may also exercise choice that manifests in lower commitment to staying in an overly stressful work environment. Service systems and organizational contexts are becoming increasingly demanding and stressful for direct mental health service providers. Therefore, community mental health organizations should assess and understand their organizational climate and intervene with empirically based organizational strategies when necessary to reduce stressful climates and maintain adaptable employees.
NASA Astrophysics Data System (ADS)
Vidal, J.-P.; Martin, E.; Kitova, N.; Najac, J.; Soubeyroux, J.-M.
2012-04-01
Drought events develop in both space and time and they are therefore best described through summary joint spatio-temporal characteristics, like mean duration, mean affected area and total magnitude. This study addresses the issue of future projections of such characteristics of drought events over France through three main research questions: (1) Are downscaled climate projections able to reproduce spatio-temporal characteristics of meteorological and agricultural droughts in France over a present-day period? (2) How will such characteristics evolve over the 21st century under different emissions scenarios? (3) How would perceived drought characteristics evolve under theoretical adaptation scenarios? These questions are addressed using the Isba land surface model, downscaled climate projections from the ARPEGE General Circulation Model under three emissions scenarios, as well as results from a previously performed 50-year multilevel and multiscale drought reanalysis over France (Vidal et al., 2010). Spatio-temporal characteristics of meteorological and agricultural drought events are computed using the Standardized Precipitation Index (SPI) and the Standardized Soil Wetness Index (SSWI), respectively, and for time scales of 3 and 12 months. Results first show that the distributions of joint spatio-temporal characteristics of observed events are well reproduced by the downscaled hydroclimate projections over a present-day period. All spatio-temporal characteristics of drought events are then found to dramatically increase over the 21st century under all considered emissions scenarios, with stronger changes for agricultural droughts. Two theoretical adaptation scenarios are eventually built based on hypotheses of adaptation to evolving climate and hydrological normals. The two scenarios differ by the way the transient adaptation is performed for a given date in the future, with reference to the normals over either the previous 30-year window ("retrospective" adaptation) or over a 30-year period centred around the date considered ("prospective" adaptation). These adaptation scenarios are translated into local-scale transient drought thresholds, as opposed to a non-adaptation scenario where the drought threshold remains constant. The perceived spatio-temporal characteristics derived from the theoretical adaptation scenarios show much reduced changes, but they call for more realistic scenarios at both the catchment and national scale in order to accurately assess the combined effect of local-scale adaptation and global-scale mitigation. This study thus proposes a proof of concept for using standardized drought indices for (1) assessing projections of spatio-temporal drought characteristics and (2) building theoretical adaptation scenarios and associated perceived changes in hydrological impact studies (Vidal et al., submitted). Vidal J.-P., Martin E., Franchistéguy L., Habets F., Soubeyroux J.-M., Blanchard M. & Baillon M. (2010) Multilevel and multiscale drought reanalysis over France with the Safran-Isba-Modcou hydrometeorological suite. Hydrology and Earth System Sciences, 14, 459-478, doi: 10.5194/hess-14-459-2010. Vidal J.-P., Martin E., Kitova N., Najac J. & Soubeyroux, J. M. (submitted) Evolution of spatio-temporal drought characteristics: validation, projections and effect of adaptation scenarios. Submitted to Hydrology and Earth System Sciences.
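To make the standardized-index machinery concrete, here is a hedged Python sketch of how an SPI-style value is typically computed: aggregate precipitation over the chosen time scale, fit a gamma distribution to the aggregates, and map the fitted cumulative probabilities to standard-normal quantiles. This is a generic textbook construction on synthetic data, not the Safran-Isba-Modcou reanalysis code.

```python
# Hedged sketch: a generic SPI-style standardization of 3-month precipitation
# totals (synthetic data), via a gamma fit and a normal quantile transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=600)   # synthetic monthly series

# 3-month rolling totals (the "time scale" of the index)
scale = 3
totals = np.convolve(monthly_precip, np.ones(scale), mode="valid")

# Fit a gamma distribution to the aggregated totals (location fixed at zero)
shape, loc, scale_param = stats.gamma.fit(totals, floc=0)

# SPI: transform fitted non-exceedance probabilities to standard-normal quantiles
probs = stats.gamma.cdf(totals, shape, loc=loc, scale=scale_param)
spi = stats.norm.ppf(probs)

print("fraction of months in moderate-or-worse drought (SPI <= -1):",
      np.mean(spi <= -1.0))
```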
Hierarchical Poly Tree Configurations for the Solution of Dynamically Refined Finite Element Models
NASA Technical Reports Server (NTRS)
Gute, G. D.; Padovan, J.
1993-01-01
This paper demonstrates how a multilevel substructuring technique, called the Hierarchical Poly Tree (HPT), can be used to integrate a localized mesh refinement into the original finite element model more efficiently. The optimal HPT configurations for solving isoparametrically square h-, p-, and hp-extensions on single and multiprocessor computers are derived. In addition, the reduced number of stiffness matrix elements that must be stored when employing this type of solution strategy is quantified. Moreover, the HPT inherently provides localized 'error-trapping' and a logical, efficient means with which to isolate physically anomalous and analytically singular behavior.
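The substructure-reduction step that such a tree organizes can be illustrated with a small static-condensation example: interior degrees of freedom are eliminated with a Schur complement, leaving a reduced stiffness matrix on the boundary. This Python sketch uses a made-up stiffness matrix and is not the HPT code itself.

```python
# Hedged sketch: static condensation (Schur complement) of one substructure.
# Interior DOFs "i" are eliminated, leaving a reduced system on boundary DOFs "b".
import numpy as np

rng = np.random.default_rng(4)
n_b, n_i = 3, 5                                  # boundary and interior DOF counts
M = rng.normal(size=(n_b + n_i, n_b + n_i))
K = M @ M.T + np.eye(n_b + n_i)                  # made-up SPD stiffness matrix
f = rng.normal(size=n_b + n_i)

Kbb, Kbi = K[:n_b, :n_b], K[:n_b, n_b:]
Kib, Kii = K[n_b:, :n_b], K[n_b:, n_b:]
fb, fi = f[:n_b], f[n_b:]

# Reduced (condensed) stiffness and load acting only on boundary DOFs
K_red = Kbb - Kbi @ np.linalg.solve(Kii, Kib)
f_red = fb - Kbi @ np.linalg.solve(Kii, fi)

u_b = np.linalg.solve(K_red, f_red)              # boundary solution from reduced system
u_i = np.linalg.solve(Kii, fi - Kib @ u_b)       # interior DOFs recovered afterwards

u_full = np.linalg.solve(K, f)                   # check against the unreduced solve
print("max difference vs. full solve:",
      np.max(np.abs(np.concatenate([u_b, u_i]) - u_full)))
```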
Cho, Yeoungjee; Johnson, David W.; Vesey, David A.; Hawley, Carmel M.; Pascoe, Elaine M.; Clarke, Margaret; Topley, Nicholas
2016-01-01
♦ Background: Peritoneal dialysis (PD) patients develop progressive and cumulative peritoneal injury with longer time spent on PD. The present study aimed to a) describe the trend of peritoneal injury biomarkers, matrix metalloproteinase-2 (MMP-2) and tissue inhibitor of metalloproteinase-1 (TIMP-1), in incident PD patients, b) to explore the capacity of dialysate MMP-2 to predict peritoneal solute transport rate (PSTR) and peritonitis, and c) to evaluate the influence of neutral pH, low glucose degradation product (GDP) PD solution on these outcomes. ♦ Methods: The study included 178 participants from the balANZ trial who had at least 1 stored dialysate sample. Changes in PSTR and peritonitis were primary outcome measures, and the utility of MMP-2 in predicting these outcomes was analyzed using multilevel linear regression and multilevel Poisson regression, respectively. ♦ Results: Significant linear increases in dialysate MMP-2 and TIMP-1 concentrations were observed (p < 0.001), but neither was affected by the type of PD solutions received (MMP-2: p = 0.07; TIMP-1: p = 0.63). An increase in PSTR from baseline was associated with higher levels of MMP-2 (p = 0.02), and the use of standard solutions over longer PD duration (p = 0.001). The risk of peritonitis was independently predicted by higher dialysate MMP-2 levels (incidence rate ratio [IRR] per ng/mL 1.01, 95% confidence interval [CI] 1.005 – 1.02, p = 0.002) and use of standard solutions (Biocompatible solution: IRR 0.45, 95% CI 0.24 – 0.85, p = 0.01). ♦ Conclusion: Dialysate MMP-2 and TIMP-1 concentrations increased with longer PD duration. Higher MMP-2 levels were associated with faster PSTR and future peritonitis risk. Administration of biocompatible solutions exerted no significant effect on dialysate levels of MMP-2 or TIMP-1, but did counteract the increase in PSTR and the risk of peritonitis associated with the use of standard PD solutions. This is the first longitudinal study to examine the clinical utility of MMP-2 as a predictor of patient-level outcomes. PMID:25292407
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Schulz, Amy J.; Israel, Barbara A.; Coombe, Chris M.; Gaines, Causandra; Reyes, Angela G.; Rowe, Zachary; Sand, Sharon; Strong, Larkin L.; Weir, Sheryl
2010-01-01
The elimination of persistent health inequities requires the engagement of multiple perspectives, resources and skills. Community-based participatory research is one approach to developing action strategies that promote health equity by addressing contextual as well as individual level factors, and that can contribute to addressing more fundamental factors linked to health inequity. Yet many questions remain about how to implement participatory processes that engage local insights and expertise, are informed by the existing public health knowledge base, and build support across multiple sectors to implement solutions. We describe a CBPR approach used to conduct a community assessment and action planning process, culminating in development of a multilevel intervention to address inequalities in cardiovascular disease in Detroit, Michigan. We consider implications for future efforts to engage communities in developing strategies toward eliminating health inequities. PMID:21873580
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization. For instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.
NASA Astrophysics Data System (ADS)
Fan, Hong; Li, Huan
2015-12-01
Location-related data are playing an increasingly irreplaceable role in business, government and scientific research. At the same time, the amount and types of data are rapidly increasing. It is a challenge to quickly find required information in this rapidly growing volume of data, as well as to efficiently provide different levels of geospatial data to users. This paper puts forward a data-oriented access model for geographic information science data. First, we analyze the features of GIS data, including traditional types such as vector and raster data and new types such as Volunteered Geographic Information (VGI). Taking into account these analyses, a classification scheme for geographic data is proposed and TRAFIE is introduced to describe the establishment of a multi-level model for geographic data. Based on this model, a multi-level, scalable access system for geospatial information is put forward. Users can select different levels of data according to their concrete application needs. Pull-based and push-based data access mechanisms based on this model are presented. A Service Oriented Architecture (SOA) was chosen for the data processing. The model is demonstrated by simulating fire-disaster data collection to support the decision-making processes of government departments. The use case shows that the data model and the data provision system are flexible and adaptable.
Qiao, Shan; Li, Xiaoming; Zhao, Guoxiang; Zhao, Junfeng; Stanton, Bonita
2014-07-01
To delineate the trajectories of loneliness and self-esteem over time among children affected by parental HIV and AIDS, and to examine how their perceived social support (PSS) influenced initial scores and change rates of these two psychological outcomes. We collected longitudinal data from children affected by parental HIV/AIDS in rural central China. Children 6-18 years of age at baseline were eligible to participate in the study and were assessed annually for 3 years. Multilevel regression models for change were used to assess the effect of baseline PSS on the trajectories of loneliness and self-esteem over time. We employed maximum likelihood estimates to fit multilevel models and specified the between-individual covariance matrix as 'unstructured' to allow correlation among the different sources of variance. Statistics including -2 Log Likelihood, Akaike Information Criterion and Bayesian Information Criterion were used in evaluating the model fit. The results of multilevel analyses indicated that loneliness scores significantly declined over time. Controlling for demographic characteristics, children with higher PSS reported significantly lower baseline loneliness score and experienced a slower rate of decline in loneliness over time. Children with higher PSS were more likely to report higher self-esteem scores at baseline. However, the self-esteem scores remained stable over time controlling for baseline PSS and all the other variables. The positive effect of PSS on psychological adjustment may imply a promising approach for future intervention among children affected by HIV/AIDS, in which efforts to promote psychosocial well being could focus on children and families with lower social support. We also call for a greater understanding of children's psychological adjustment process in various contexts of social support and appropriate adaptations of evidence-based interventions to meet their diverse needs.
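As a hedged sketch of the multilevel growth-model idea (random intercepts and slopes for time, with baseline perceived social support predicting the level and rate of change), here is how such a model could be specified in Python with statsmodels. The variable names and simulated data are invented, and the published analysis was not necessarily fit with this software.

```python
# Hedged sketch: multilevel model for change (random intercept + slope over waves),
# with baseline perceived social support (PSS) predicting level and rate of change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_children, n_waves = 200, 3
child = np.repeat(np.arange(n_children), n_waves)
wave = np.tile(np.arange(n_waves), n_children)
pss = np.repeat(rng.normal(0, 1, n_children), n_waves)        # baseline PSS (z-scored)
u0 = np.repeat(rng.normal(0, 1.0, n_children), n_waves)       # random intercepts
u1 = np.repeat(rng.normal(0, 0.3, n_children), n_waves)       # random slopes
loneliness = (10 - 0.5 * wave - 0.8 * pss + 0.2 * pss * wave
              + u0 + u1 * wave + rng.normal(0, 1, n_children * n_waves))

df = pd.DataFrame({"child": child, "wave": wave, "pss": pss, "loneliness": loneliness})
model = smf.mixedlm("loneliness ~ wave * pss", df,
                    groups=df["child"], re_formula="~wave")
result = model.fit()
print(result.summary())
```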
Outsourcing neural active control to passive composite mechanics: a tissue engineered cyborg ray
NASA Astrophysics Data System (ADS)
Gazzola, Mattia; Park, Sung Jin; Park, Kyung Soo; Park, Shirley; di Santo, Valentina; Deisseroth, Karl; Lauder, George V.; Mahadevan, L.; Parker, Kevin Kit
2016-11-01
Translating the blueprint that stingrays and skates provide, we create a cyborg swimming ray capable of orchestrating adaptive maneuvering and phototactic navigation. The impossibility of replicating the neural system of batoid fish is bypassed by outsourcing algorithmic functionality to the body's composite mechanics, hence recasting the active control problem as a passive design problem. We present a first step in engineering multilevel "brain-body-flow" systems that couple sensory information to motor coordination and movement, leading to behavior. This work paves the way for the development of autonomous and adaptive artificial creatures able to process multiple sensory inputs and produce complex behaviors in distributed systems and may represent a path toward soft-robotic "embodied cognition".
Corbie-Smith, Giselle; Akers, Aletha; Blumenthal, Connie; Council, Barbara; Wynn, Mysha; Muhammad, Melvin; Stith, Doris
2011-01-01
Southeastern states are among the hardest hit by the HIV epidemic in this country, and racial disparities in HIV rates are high in this region. This is particularly true in our communities of interest in rural eastern North Carolina. Although most recent efforts to prevent HIV attempt to address multiple contributing factors, we have found few multilevel HIV interventions that have been developed, tailored or tested in rural communities for African Americans. We describe how Project GRACE integrated Intervention Mapping (IM) methodology with community based participatory research (CBPR) principles to develop a multi-level, multi-generational HIV prevention intervention. IM was carried out in a series of steps from review of relevant data through producing program components. Through the IM process, all collaborators agreed that we needed a family-based intervention involving youth and their caregivers. We found that the structured approach of IM can be adapted to incorporate the principles of CBPR. PMID:20528128
The Role of Individual and Collective Mindfulness in Promoting Occupational Safety in Health Care.
Dierynck, Bart; Leroy, Hannes; Savage, Grant T; Choi, Ellen
2017-02-01
Although the importance of safety regulations is highly emphasized in hospitals, nurses frequently work around, or intentionally bypass, safety regulations. We argue that work-arounds occur because adhering to safety regulations usually requires more time and work process design often lacks complementarity with safety regulations. Our main proposition is that mindfulness is associated with a decrease in occupational safety failures through a decrease in work-arounds. First, we propose that individual mindfulness may prevent the depletion of motivational resources caused by worrying about the consequences of time lost when adhering to safety regulations. Second, we argue that collective mindfulness may provide nursing teams with a cognitive infrastructure that facilitates the detection and adaptation of work processes. The results of a multilevel analysis of 580 survey responses from nurses are consistent with our propositions. Our multilevel analytic approach enables us to account for the unique variance in work-arounds that individual and collective mindfulness explain.
The JOINT model of nurse absenteeism and turnover: a systematic review.
Daouk-Öyry, Lina; Anouze, Abdel-Latef; Otaki, Farah; Dumit, Nuhad Yazbik; Osman, Ibrahim
2014-01-01
Absenteeism and turnover among healthcare workers have a significant impact on overall healthcare system performance. The literature captures variables from different levels of measurement and analysis as being associated with attendance behavior among nurses. Yet, it remains unclear how variables from different contextual levels interact to impact nurses' attendance behaviors. The purpose of this review is to develop an integrative multilevel framework that optimizes our understanding of absenteeism and turnover among nurses in hospital settings. We therefore systematically examine English-only studies retrieved from two major databases, PubMed and CINAHL Plus, and published between January 2007 and January 2013 (inclusive). Our review led to the identification of 7619 articles, of which 41 matched the inclusion criteria. The analysis yielded a total of 91 antecedent variables and 12 outcome variables for turnover, and 29 antecedent variables and 9 outcome variables for absenteeism. The various manifested variables were analyzed using content analysis and grouped into 11 categories, and further into five main factors: Job, Organization, Individual, National and inTerpersonal (JOINT). Thus, we propose the JOINT multilevel conceptual model for investigating absenteeism and turnover among nurses. The JOINT model can be adapted by researchers for fitting their hypothesized multilevel relationships. It can also be used by nursing managers as a lens for holistically managing nurses' attendance behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.
Community, culture and sustainability in multilevel dynamic systems intervention science.
Schensul, Jean J
2009-06-01
This paper addresses intertwined issues in the conceptualization, implementation and evaluation of multilevel dynamic systems intervention science (MDSIS). Interventions are systematically planned, conducted and evaluated social science-based cultural products intercepting the lives of people and institutions in the context of multiple additional events and processes (which also may be referred to as interventions) that may speed, slow or reduce change towards a desired outcome. Multilevel interventions address change efforts at multiple social levels in the hope that effects at each level will forge synergistic links, facilitating movement toward desired change. This paper utilizes an ecological framework that identifies macro (policy and regulatory institutions), meso (organizations and agencies with resources, and power) and micro (individuals, families and friends living in communities) interacting directly and indirectly. An MDSIS approach hypothesizes that change toward a goal will occur faster and more effectively when synchronized and supported across levels in a social system. MDSIS approaches by definition involve "whole" communities and cannot be implemented without the establishment of working community partnerships. This paper takes a dynamic systems approach to science as conducted in communities, and discusses four concepts that are central to MDSIS--science, community, culture, and sustainability. These concepts are important in community based participatory research and to the targeting, refinement, and adaptation of enduring interventions. Consistency in their meaning and use can promote forward movement in the field of MDSIS, and in community-based prevention science.
Gittelsohn, Joel; Trude, Angela C; Poirier, Lisa; Ross, Alexandra; Ruggiero, Cara; Schwendler, Teresa; Anderson Steeves, Elizabeth
2017-11-10
The multifactorial causes of obesity require multilevel and multicomponent solutions, but such combined strategies have not been tested to improve the community food environment. We evaluated the impact of a multilevel (operating at different levels of the food environment) multicomponent (interventions occurring at the same level) community intervention. The B'more Healthy Communities for Kids (BHCK) intervention worked at the wholesaler (n = 3), corner store (n = 50), carryout (n = 30), recreation center (n = 28), and household (n = 365) levels to improve availability, purchasing, and consumption of healthier foods and beverages (low-sugar, low-fat) in low-income, predominantly African American food desert zones in the city of Baltimore (MD, USA), ultimately intending to lead to decreased weight gain in children (not reported in this manuscript). For this paper, we focus on more proximal impacts on the food environment, and measure change in stocking, sales and purchase of promoted foods at the different levels of the food system in 14 intervention neighborhoods, as compared to 14 comparison neighborhoods. Sales of promoted products increased in wholesalers. Stocking of these products improved in corner stores, but not in carryouts, and we did not find any change in total sales. Children more exposed to the intervention increased their frequency of purchase of promoted products, although improvement was not seen for adult caregivers. A multilevel food environment intervention in a low-income urban setting improved aspects of the food system, leading to increased healthy food purchasing behavior in children.
NASA Technical Reports Server (NTRS)
Nakamura, S.
1983-01-01
The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects, including shock waves, are also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
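A minimal sketch of the variance reduction idea described above, assuming a generic two-level control-variate Monte Carlo estimator in Python with toy stand-in functions for the high-fidelity and reduced-basis outputs; it is not the authors' HDG/reduced-basis implementation.

    # Hedged sketch: two-level control-variate Monte Carlo in the spirit of multilevel
    # variance reduction. The "models" below are toy stand-ins, not an HDG/RB code.
    import numpy as np

    rng = np.random.default_rng(0)

    def high_fidelity(z):
        # stand-in for an expensive PDE output functional of a random parameter z
        return np.sin(z) + 0.05 * z**2

    def reduced_model(z):
        # stand-in for a cheap reduced-basis approximation of the same output
        return np.sin(z)

    # Level 0: many cheap samples of the reduced model
    z0 = rng.normal(size=100_000)
    level0 = reduced_model(z0).mean()

    # Level 1: few expensive samples of the *difference* (high fidelity minus reduced)
    z1 = rng.normal(size=500)
    level1 = (high_fidelity(z1) - reduced_model(z1)).mean()

    estimate = level0 + level1   # telescoping sum: E[Q_HF] = E[Q_RB] + E[Q_HF - Q_RB]
    print(f"two-level estimate of E[Q]: {estimate:.4f}")

Because most samples are spent on the cheap level and only the small-variance correction is sampled with the expensive model, the overall cost of reaching a given Monte Carlo tolerance drops, which is the mechanism the abstract describes.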
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
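A schematic sketch of the assess-predict-adapt cycle described above, written in Python with a placeholder per-element error indicator standing in for the paper's element-residual estimator; the refinement rules and the "roughness" weights are illustrative assumptions, not the paper's algorithm.

    # Hedged sketch of an hp-adaptive loop: assess with an a posteriori indicator, then
    # h-refine rough elements and p-enrich smooth ones until a tolerance is met.
    import numpy as np

    rough = np.linspace(0.2, 1.0, 8)          # assumed per-element "roughness" of the solution

    def estimate_error(h, p):
        # placeholder a posteriori indicator: error per element ~ roughness * h**p
        return rough * h ** p.astype(float)

    tol = 1e-3
    h = np.full(8, 1.0 / 8)                   # element sizes
    p = np.full(8, 1)                         # local polynomial degrees

    for sweep in range(12):
        eta = estimate_error(h, p)            # step 1: assess the current approximation
        if np.sqrt(np.sum(eta ** 2)) <= tol:  # target accuracy reached
            break
        worst = eta > 0.5 * eta.max()         # step 2: predict where refinement is needed
        h[worst] *= 0.5                       # h-refine the elements dominating the error
        p[~worst] += 1                        # p-enrich where the solution is smooth

    print(f"sweeps: {sweep}, error: {np.sqrt(np.sum(eta**2)):.2e}, max degree: {p.max()}")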
The use of solution adaptive grids in solving partial differential equations
NASA Technical Reports Server (NTRS)
Anderson, D. A.; Rai, M. M.
1982-01-01
The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
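As a hedged illustration of the basic concept, the following Python sketch redistributes a one-dimensional grid so that an arc-length-type monitor function of the solution gradient is approximately equidistributed; the test function and the monitor are assumptions chosen for clarity, not taken from the paper.

    # Minimal 1D solution-adaptive grid: redistribute points so that a gradient-based
    # weight w = sqrt(1 + |du/dx|^2) is (approximately) equidistributed over the cells.
    import numpy as np

    u = lambda x: np.tanh(20.0 * (x - 0.5))          # assumed solution with a sharp layer

    x = np.linspace(0.0, 1.0, 41)                    # initial uniform grid
    for _ in range(20):                              # fixed-point equidistribution iterations
        du = np.gradient(u(x), x)
        w = np.sqrt(1.0 + du**2)                     # arc-length-type monitor function
        wm = 0.5 * (w[:-1] + w[1:]) * np.diff(x)     # weight accumulated per cell
        s = np.concatenate(([0.0], np.cumsum(wm)))   # cumulative "adapted" coordinate
        x = np.interp(np.linspace(0.0, s[-1], x.size), s, x)

    print("smallest cell:", np.diff(x).min(), "largest cell:", np.diff(x).max())

After a few iterations the cells cluster around the steep layer at x = 0.5, which is the behaviour the abstract describes: the grid adjusts as the solution dictates while the total number of points stays fixed.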
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear quadratic Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing bending natural frequency.
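A hedged sketch of the two Riccati solutions that underlie an LQG design (regulator gain and Kalman filter gain), using SciPy on an arbitrary two-state example; the system matrices and weights are invented for illustration and are unrelated to the DAST ARW-II model.

    # Generic textbook LQG pieces: LQR gain from the regulator Riccati equation and the
    # Kalman filter gain from the dual (filter) Riccati equation.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-4.0, -0.4]])    # assumed lightly damped structural mode
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[1.0]])         # state and control weights
    W, V = 0.1 * np.eye(2), np.array([[0.01]])  # process and measurement noise covariances

    P = solve_continuous_are(A, B, Q, R)        # regulator Riccati equation
    K = np.linalg.solve(R, B.T @ P)             # LQR gain, u = -K x
    S = solve_continuous_are(A.T, C.T, W, V)    # filter Riccati equation (dual problem)
    L = S @ C.T @ np.linalg.inv(V)              # Kalman filter gain

    print("LQR gain:", K, "\nKalman gain:", L.ravel())

Sensitivity-of-optimum analyses of the kind the abstract mentions differentiate P and S (and hence K and L) with respect to structural parameters; only the nominal gains are computed here.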
Green, Amy E.; Dishop, Christopher; Aarons, Gregory A
2016-01-01
Objective: Community mental health providers often operate within stressful work environments and are at high risk for emotional exhaustion, which can negatively affect job performance and client satisfaction with services. This cross-sectional study examines the relationships between organizational stress, provider adaptability, and organizational commitment. Methods: Variables were analyzed using moderated multi-level regression in a sample of 311 mental health providers from 49 community mental health programs. Results: Stressful organizational climate, characterized by high levels of emotional exhaustion, role conflict, and role overload, was negatively related to organizational commitment. Organizational stress moderated the relationship between provider adaptability and organizational commitment, such that those who were more adaptable had greater levels of organizational commitment when organizational stress was low, but were less committed than those who were less adaptable when organizational stress was high. Conclusions: In the current study, providers higher in adaptability may perceive their organization as a greater fit when characterized by lower levels of stressfulness; however, highly adaptable providers may also exercise choice that manifests in lower commitment to staying in an overly stressful work environment. Service systems and organizational contexts are becoming increasingly demanding and stressful for direct mental health service providers. Therefore, community mental health organizations should assess and understand their organizational climate and intervene with empirically based organizational strategies when necessary to reduce stressful climates and maintain desirable employees. PMID:27301760
Environmental variability and acoustic signals: a multi-level approach in songbirds.
Medina, Iliana; Francis, Clinton D
2012-12-23
Among songbirds, growing evidence suggests that acoustic adaptation of song traits occurs in response to habitat features. Despite extensive study, most research supporting acoustic adaptation has only considered acoustic traits averaged for species or populations, overlooking intraindividual variation of song traits, which may facilitate effective communication in heterogeneous and variable environments. Fewer studies have explicitly incorporated sexual selection, which, if strong, may favour variation across environments. Here, we evaluate the prevalence of acoustic adaptation among 44 species of songbirds by determining how environmental variability and sexual selection intensity are associated with song variability (intraindividual and intraspecific) and short-term song complexity. We show that variability in precipitation can explain short-term song complexity among taxonomically diverse songbirds, and that precipitation seasonality and the intensity of sexual selection are related to intraindividual song variation. Our results link song complexity to environmental variability, something previously found for mockingbirds (Family Mimidae). Perhaps more importantly, our results illustrate that individual variation in song traits may be shaped by both environmental variability and strength of sexual selection.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improve the querying. Among the related techniques, relevance feedback has become a hot research aspect because it combines the information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi-level image content analysis, the relevant images from the user could be automatically analyzed in different levels and the querying could be modified in terms of different analysis results. Secondly, to make it more convenient to the user, the procedure of relevance feedback could be led with memory or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results gained by our self-adaptive relevance feedback are given.
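For illustration only, the following Python sketch shows a standard Rocchio-style query update, a common textbook form of relevance feedback in which the query vector is moved toward images marked relevant and away from those marked non-relevant; it is not the multilevel content-analysis scheme proposed in the paper, and the feature values are made up.

    # Generic relevance feedback: Rocchio-style update of a query feature vector.
    import numpy as np

    def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
        q = alpha * query
        if len(relevant):
            q = q + beta * np.mean(relevant, axis=0)      # pull toward relevant examples
        if len(non_relevant):
            q = q - gamma * np.mean(non_relevant, axis=0)  # push away from non-relevant ones
        return q

    query = np.array([0.2, 0.8, 0.1])                        # e.g. colour/texture/shape features
    relevant = np.array([[0.3, 0.9, 0.2], [0.25, 0.7, 0.15]])
    non_relevant = np.array([[0.9, 0.1, 0.8]])
    print("updated query:", rocchio(query, relevant, non_relevant))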
Self Efficacy in Depression: Bridging the Gap Between Competence and Real World Functioning.
Milanovic, Melissa; Ayukawa, Emma; Usyatynsky, Aleksandra; Holshausen, Katherine; Bowie, Christopher R
2018-05-01
We investigated the discrepancy between competence and real-world performance in major depressive disorder (MDD) for adaptive and interpersonal behaviors, determining whether self-efficacy significantly predicts this discrepancy, after considering depressive symptoms. Forty-two participants (mean age = 37.64 years, 66.67% female) with MDD were recruited from mental health clinics. Competence, self-efficacy, and real-world functioning were evaluated in adaptive and interpersonal domains; depressive symptoms were assessed with the Beck Depression Inventory II. Hierarchical regression analysis identified predictors of functional disability and the discrepancy between competence and real-world functioning. Self-efficacy significantly predicted functioning in the adaptive and interpersonal domains over and above depressive symptoms. Interpersonal self-efficacy accounted for significant variance in the discrepancy between interpersonal competence and functioning beyond symptoms. Using a multilevel, multidimensional approach, we provide the first data regarding relationships among competence, functioning, and self-efficacy in MDD.
Kislov, Roman; Waterman, Heather; Harvey, Gill; Boaden, Ruth
2014-11-15
Knowledge mobilisation in healthcare organisations is often carried out through relatively short-term projects dependent on limited funding, which raises concerns about the long-term sustainability of implementation and improvement. It is becoming increasingly recognised that the translation of research evidence into practice has to be supported by developing the internal capacity of healthcare organisations to engage with and apply research. This process can be supported by external knowledge mobilisation initiatives represented, for instance, by professional associations, collaborative research partnerships and implementation networks. This conceptual paper uses empirical and theoretical literature on organisational learning and dynamic capabilities to enhance our understanding of intentional capacity building for knowledge mobilisation in healthcare organisations. The discussion is structured around the following three themes: (1) defining and classifying capacity building for knowledge mobilisation; (2) mechanisms of capability development in organisational context; and (3) individual, group and organisational levels of capability development. Capacity building is presented as a practice-based process of developing multiple skills, or capabilities, belonging to different knowledge domains and levels of complexity. It requires an integration of acquisitive learning, through which healthcare organisations acquire knowledge and skills from knowledge mobilisation experts, and experience-based learning, through which healthcare organisations adapt, absorb and modify their knowledge and capabilities through repeated practice. Although the starting point for capability development may be individual-, team- or organisation-centred, facilitation of the transitions between individual, group and organisational levels of learning within healthcare organisations will be needed. Any initiative designed to build capacity for knowledge mobilisation should consider the subsequent trajectory of newly developed knowledge and skills within the recipient healthcare organisations. The analysis leads to four principles underpinning a practice-based approach to developing multilevel knowledge mobilisation capabilities: (1) moving from 'building' capacity from scratch towards 'developing' capacity of healthcare organisations; (2) moving from passive involvement in formal education and training towards active, continuous participation in knowledge mobilisation practices; (3) moving from lower-order, project-specific capabilities towards higher-order, generic capabilities allowing healthcare organisations to adapt to change, absorb new knowledge and innovate; and (4) moving from single-level to multilevel capability development involving transitions between individual, group and organisational learning.
Nagler, Rebekah H; Bigman, Cabral A; Ramanadhan, Shoba; Ramamurthi, Divya; Viswanath, K
2016-04-01
Americans remain under-informed about cancer and other health disparities and the social determinants of health (SDH). The news media may be contributing to this knowledge deficit, whether by discussing these issues narrowly or ignoring them altogether. Because local media are particularly important in influencing public opinion and support for public policies, this study examines the prevalence and framing of disparities/SDH in local mainstream and ethnic print news. We conducted a multi-method content analysis of local mainstream (English language) and ethnic (Spanish language) print news in two lower income cities in New England with substantial racial/ethnic minority populations. After establishing intercoder reliability (κ = 0.63-0.88), coders reviewed the primary English and Spanish language newspaper in each city, identifying both disparities and non-disparities health stories published between February 2010 and January 2011. Local print news coverage of cancer and other health disparities was rare. Of 650 health stories published across four newspapers during the one-year study period, only 21 (3.2%) discussed disparities/SDH. Although some stories identified causes of and solutions for disparities, these were often framed in individual (e.g., poor dietary habits) rather than social contextual terms (e.g., lack of food availability/affordability). Cancer and other health stories routinely missed opportunities to discuss disparities/SDH. Local mainstream and ethnic media may be ideal targets for multilevel interventions designed to address cancer and other health inequalities. By increasing media attention to and framing of health disparities, we may observe important downstream effects on public opinion and support for structural solutions to disparities, particularly at the local level. Cancer Epidemiol Biomarkers Prev; 25(4); 603-12. ©2016 American Association for Cancer Research.
A Method of Character Detection and Segmentation for Highway Guide Signs
NASA Astrophysics Data System (ADS)
Xu, Jiawei; Zhang, Chongyang
2018-01-01
In this paper, a method of character detection and segmentation for highway signs in China is proposed. It consists of four steps. Firstly, the highway sign area is detected by colour and geometric features, and the possible character region is obtained by a multi-level projection strategy. Secondly, the pseudo target character region is removed using the local binary patterns (LBP) feature. Thirdly, a convolutional neural network (CNN) is used to classify target regions. Finally, adaptive projection strategies are used to segment character strings. Experimental results indicate that the proposed method achieves new state-of-the-art results.
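A minimal sketch of projection-based character segmentation on a synthetic binary image, assuming the sign region has already been detected; the LBP filtering and CNN classification stages described above are omitted, and the image content is fabricated for illustration.

    # Toy projection-profile segmentation: columns whose vertical ink count is non-zero
    # form candidate character boxes.
    import numpy as np

    img = np.zeros((10, 30), dtype=int)
    img[2:8, 3:8] = 1      # fake character 1
    img[2:8, 12:18] = 1    # fake character 2
    img[2:8, 22:27] = 1    # fake character 3

    profile = img.sum(axis=0)                   # vertical projection: ink per column
    ink = (profile > 0).astype(np.int8)
    # transitions 0->1 and 1->0 mark the start/end columns of each character box
    edges = np.flatnonzero(np.diff(np.concatenate(([0], ink, [0]))))
    boxes = list(zip(edges[0::2], edges[1::2]))  # (start, end-exclusive) column ranges
    print("character column ranges:", boxes)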
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.
Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2003-01-01
An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.
Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers
NASA Astrophysics Data System (ADS)
Gorbunov, M. S.; Antonov, A. A.
2017-01-01
It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault-tolerance is another important factor affecting performance, chip area and power consumption. Using special SRAM cells and error-correcting codes is often too expensive in relation to the performance needed. We discuss the possibility of finding optimal solutions for a modern onboard computer for scientific apparatus, focusing on multi-level cache memory design.
Huai, Jianjun
2016-09-27
Although the integrated indicator methods have become popular for assessing vulnerability to climate change, their proliferation has introduced a confusing array of scales and indicators that cause a science-policy gap. I argue for a clear adaptation pathway in an "integrative typology" of regional vulnerability that matches appropriate scales, optimal measurements and adaptive strategies in a six-dimensional and multi-level analysis framework of integration and typology inspired by the "5W1H" questions: "Who is concerned about how to adapt to the vulnerability of what to what in some place (where) at some time (when)?" Using the case of the vulnerability of wheat, barley and oats to drought in Australian wheat sheep zones during 1978-1999, I answer the "5W1H" questions through establishing the "six typologies" framework. I then optimize the measurement of vulnerability through contrasting twelve kinds of vulnerability scores with the divergence of crops yields from their regional mean. Through identifying the socioeconomic constraints, I propose seven generic types of crop-drought vulnerability and local adaptive strategy. Our results illustrate that the process of assessing vulnerability and selecting adaptations can be enhanced using a combination of integration, optimization and typology, which emphasize dynamic transitions and transformations between integration and typology.
Adaptive governance, ecosystem management, and natural capital.
Schultz, Lisen; Folke, Carl; Österblom, Henrik; Olsson, Per
2015-06-16
To gain insights into the effects of adaptive governance on natural capital, we compare three well-studied initiatives; a landscape in Southern Sweden, the Great Barrier Reef in Australia, and fisheries in the Southern Ocean. We assess changes in natural capital and ecosystem services related to these social-ecological governance approaches to ecosystem management and investigate their capacity to respond to change and new challenges. The adaptive governance initiatives are compared with other efforts aimed at conservation and sustainable use of natural capital: Natura 2000 in Europe, lobster fisheries in the Gulf of Maine, North America, and fisheries in Europe. In contrast to these efforts, we found that the adaptive governance cases developed capacity to perform ecosystem management, manage multiple ecosystem services, and monitor, communicate, and respond to ecosystem-wide changes at landscape and seascape levels with visible effects on natural capital. They enabled actors to collaborate across diverse interests, sectors, and institutional arrangements and detect opportunities and problems as they developed while nurturing adaptive capacity to deal with them. They all spanned local to international levels of decision making, thus representing multilevel governance systems for managing natural capital. As with any governance system, internal changes and external drivers of global impacts and demands will continue to challenge the long-term success of such initiatives.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite --FAC-- algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Chacon, Luis
2005-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerably difficult numerical treatment. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. A crucial element is the development of an effective multilevel treatment of the grid equation [L. Chacón, G. Lapenta, A fully implicit, nonlinear adaptive grid strategy, J. Comput. Phys., accepted (2005)]. We will show that such an approach is competitive vs. uniform grids both from the accuracy (due to adaptivity) and the efficiency standpoints. Results for a variety of models in 1D and 2D geometries, including nonlinear diffusion, radiation-diffusion, Burgers equation, and gas dynamics will be presented.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
The Development and Assesment of Adaptation Pathways for Urban Pluvial Flooding
NASA Astrophysics Data System (ADS)
Babovic, F.; Mijic, A.; Madani, K.
2017-12-01
Around the globe, urban areas are growing in both size and importance. However, due to the prevalence of impermeable surfaces within the urban fabric of cities, these areas have a high risk of pluvial flooding. Due to the convergence of population growth and climate change, the risk of pluvial flooding is growing. When designing solutions and adaptations to pluvial flood risk, urban planners and engineers encounter a great deal of uncertainty due to model uncertainty, uncertainty within the data utilised, and uncertainty related to future climate and land use conditions. The interaction of these uncertainties leads to conditions of deep uncertainty. However, infrastructure systems must be designed and built in the face of this deep uncertainty. An Adaptation Tipping Points (ATP) methodology was used to develop a strategy to adapt an urban drainage system in the North East of London under conditions of deep uncertainty. The ATP approach was used to assess the current drainage system and potential drainage system adaptations. These adaptations were assessed against potential changes in rainfall depth and peakedness, defined as the ratio of mean to peak rainfall. These solutions encompassed both traditional and blue-green solutions that the Local Authority are known to be considering. This resulted in a set of Adaptation Pathways. However, these pathways do not convey any information regarding the relative merits and demerits of the potential adaptation options presented. To address this, a cost-benefit metric was developed to reflect the solutions' costs and benefits under uncertainty. The resulting metric combines elements of the Benefits of SuDS Tool (BeST) with real options analysis in order to reflect the potential value of ecosystem services delivered by blue-green solutions under uncertainty. Lastly, it is discussed how a local body can utilise the adaptation pathways, their relative costs and benefits, and a system of local data collection to help guide better decision making with respect to urban flood adaptation.
Design of QoS-Aware Multi-Level MAC-Layer for Wireless Body Area Network.
Hu, Long; Zhang, Yin; Feng, Dakui; Hassan, Mohammad Mehedi; Alelaiwi, Abdulhameed; Alamri, Atif
2015-12-01
With the advances in wearable computing and various wireless technologies, there is an increasing trend to outsource body signals from the wireless body area network (WBAN) to the outside world, including cyber space, healthcare big data clouds, etc. Since the environmental and physiological data collected by multimodal sensors have different importance, the provisioning of quality of service (QoS) for the sensory data in WBAN is a critical issue. This paper proposes a multi-level QoS design at the WBAN media access control (MAC) layer in terms of user level, data level and time level. In the proposed QoS provisioning scheme, different users have different priorities, various sensory data collected by different sensor nodes have different importance, while data priority for the same sensor node varies over time. The experimental results show that the proposed multi-level QoS provisioning solution in WBAN yields better performance for meeting QoS requirements of personalized healthcare applications while achieving energy saving.
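A toy sketch of the multi-level priority idea (user level, data level and arrival time) using a heap-ordered queue in Python; it only mimics the ordering logic and is not the proposed MAC-layer protocol (no slot assignment, contention or energy model), and the priority values are assumptions.

    # Packets are ordered first by user priority, then data priority, then arrival time;
    # lower numbers mean higher priority.
    import heapq

    queue = []
    heapq.heappush(queue, (2, 1, 0.10, "ambient temperature"))
    heapq.heappush(queue, (1, 0, 0.30, "ECG sample, at-risk patient"))
    heapq.heappush(queue, (1, 2, 0.05, "activity count"))

    while queue:
        user, data, t, payload = heapq.heappop(queue)
        print(f"send (user={user}, data={data}, t={t:.2f}s): {payload}")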
"Generality of mis-fit"? The real-life difficulty of matching scales in an interconnected world.
Keskitalo, E Carina H; Horstkotte, Tim; Kivinen, Sonja; Forbes, Bruce; Käyhkö, Jukka
2016-10-01
A clear understanding of processes at multiple scales and levels is of special significance when conceiving strategies for human-environment interactions. However, understanding and application of the scale concept often differ between administrative-political and ecological disciplines. These differences are mirrored in major divergences over potential solutions, including whether and how scales can be made congruent at all. As a result, opportunities of seeking "goodness-of-fit" between different concepts of governance should perhaps be reconsidered in the light of a potential "generality of mis-fit." This article reviews the interdisciplinary considerations inherent in the concept of scale in its ecological, as well as administrative-political, significance and argues that issues of how to manage "mis-fit" should be awarded more emphasis in social-ecological research and management practices. These considerations are exemplified by the case of reindeer husbandry in Fennoscandia. Whilst an indigenous small-scale practice, reindeer husbandry involves multi-level ecological and administrative-political complexities - complexities that we argue may arise in any multi-level system.
Displacement Based Multilevel Structural Optimization
NASA Technical Reports Server (NTRS)
Sobieszezanski-Sobieski, J.; Striz, A. G.
1996-01-01
In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.
NASA Technical Reports Server (NTRS)
Bates, J. R.; Moorthi, S.; Higgins, R. W.
1993-01-01
An adiabatic global multilevel primitive equation model using a two time-level, semi-Lagrangian semi-implicit finite-difference integration scheme is presented. A Lorenz grid is used for vertical discretization and a C grid for the horizontal discretization. The momentum equation is discretized in vector form, thus avoiding problems near the poles. The 3D model equations are reduced by a linear transformation to a set of 2D elliptic equations, whose solution is found by means of an efficient direct solver. The model (with minimal physics) is integrated for 10 days starting from an initialized state derived from real data. A resolution of 16 levels in the vertical is used, with various horizontal resolutions. The model is found to be stable and efficient, and to give realistic output fields. Integrations with time steps of 10 min, 30 min, and 1 h are compared, and the differences are found to be acceptable.
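A hedged one-dimensional illustration of the semi-Lagrangian idea, in which the new-time field is obtained by interpolating the old field at upstream departure points; this constant-wind, periodic toy in Python is far simpler than the two-time-level semi-implicit global model described above.

    # 1D semi-Lagrangian advection: trace back along the wind to the departure point and
    # interpolate the old field there. The step deliberately exceeds the advective CFL
    # limit (CFL = u*dt/dx = 4), which an explicit Eulerian scheme could not tolerate.
    import numpy as np

    n, L, u, dt = 200, 1.0, 1.0, 0.02
    x = np.linspace(0.0, L, n, endpoint=False)
    phi = np.exp(-200.0 * (x - 0.3)**2)          # initial tracer blob

    for step in range(25):
        xd = (x - u * dt) % L                    # departure points (periodic domain)
        phi = np.interp(xd, x, phi, period=L)    # interpolate old field at departure points

    print("tracer maximum after advection:", phi.max())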
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
Multi-level human evolution: ecological patterns in hominin phylogeny.
Parravicini, Andrea; Pievani, Telmo
2016-06-20
Evolution is a process that occurs at many different levels, from genes to ecosystems. Genetic variations and ecological pressures are hence two sides of the same coin; but due both to fragmentary evidence and to the influence of a gene-centered and gradualistic approach to evolutionary phenomena, the field of paleoanthropology has been slow to take the role of macro-evolutionary patterns (i.e. ecological and biogeographical at large scale) seriously. However, several very recent findings in paleoanthropology stress both climate instability and ecological disturbance as key factors affecting the highly branching hominin phylogeny, from the earliest hominins to the appearance of cognitively modern humans. Allopatric speciation due to geographic displacement, turnover-pulses of species, adaptive radiation, mosaic evolution of traits in several coeval species, bursts of behavioral innovation, serial dispersals out of Africa, are just some of the macro-evolutionary patterns emerging from the field. The multilevel approach to evolution proposed by paleontologist Niles Eldredge is adopted here as interpretative tool, and has yielded a larger picture of human evolution that integrates different levels of evolutionary change, from local adaptations in limited ecological niches to dispersal phenotypes able to colonize an unprecedented range of ecosystems. Changes in global climate and Earth's surface most greatly affected human evolution. Precisely because it is cognitively hard for us to appreciate the long-term common destiny we share with the whole biosphere, it is particularly valuable to highlight the accumulating evidence that human evolution has been deeply affected by global ecological changes that transformed our African continent of origin.
Shearer, David A; Sparkes, William; Northeast, Jonny; Cunningham, Daniel J; Cook, Christian J; Kilduff, Liam P
2017-05-01
Biochemical (e.g. creatine kinase (CK)) and neuromuscular (e.g. peak power output (PPO)) markers of recovery are expensive and require specialist equipment. Perceptual measures are an effective alternative, yet most validated scales are too long for daily use. This study utilises a longitudinal multi-level design to test an adapted Brief Assessment of Mood (BAM+), with four extra items and a 100 mm visual analogue scale, to measure recovery. Elite under-21 academy soccer players (N=11) were monitored across five games with data (BAM+, CK and PPO) collected for each game at 24 h pre-match and 24 h and 48 h post-match. Match activity data for each participant were also collected using GPS monitors worn by the players. BAM+, CK and PPO had significant (p<.05) linear and quadratic growth curves across time and games that matched the known time course of fatigue and recovery. Multi-level linear modelling (MLM) with random intercepts for 'participant' and 'game' indicated that only CK significantly contributed to the variance of BAM+ scores (p<.05). Significant correlations (p<.01) were found between changes in BAM+ scores from baseline at 24 and 48 h post-match and total distance covered per minute, high intensity distance covered per minute, and total number of sprints per minute. Visual and inferential results indicate that the BAM+ appears effective for monitoring longitudinal recovery cycles in elite level athletes. Future research is needed to confirm both the scale's reliability and validity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
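A sketch of how a random-intercept multilevel model of this kind might be fitted with statsmodels in Python, using synthetic data and assumed variable names (bam, ck, participant); it is not the study's dataset, and the study's model also included a random intercept for game, which is omitted here for simplicity.

    # Random-intercept mixed model: BAM+ regressed on CK with participants as groups.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_players, n_obs = 11, 15
    df = pd.DataFrame({
        "participant": np.repeat(np.arange(n_players), n_obs),
        "ck": rng.normal(400, 120, n_players * n_obs),        # creatine kinase (U/L), fabricated
    })
    player_effect = rng.normal(0, 5, n_players)[df["participant"]]
    df["bam"] = 60 - 0.02 * df["ck"] + player_effect + rng.normal(0, 3, len(df))

    model = smf.mixedlm("bam ~ ck", df, groups=df["participant"])  # random intercept per player
    result = model.fit()
    print(result.summary())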
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2018-05-01
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.
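A toy one-dimensional sketch of a recovery-type error indicator in Python: cells where a smoothed (recovered) gradient differs most from the raw cell gradient are flagged for refinement. This conveys only the flagging idea; the anisotropic space-time machinery and the velocity-augmented estimator of the paper are not represented, and the concentration profile is invented.

    # Flag cells where the recovered gradient disagrees most with the raw cell gradient.
    import numpy as np

    x = np.linspace(0.0, 1.0, 41)
    c = 0.5 * (1.0 - np.tanh(30.0 * (x - 0.4)))      # concentration with a sharp front

    cell_grad = np.diff(c) / np.diff(x)                        # piecewise-constant gradient
    nodal_grad = np.gradient(c, x)                             # "recovered" (smoothed) gradient
    recovered_on_cells = 0.5 * (nodal_grad[:-1] + nodal_grad[1:])
    eta = np.abs(recovered_on_cells - cell_grad) * np.diff(x)  # per-cell error indicator

    flag = eta > 0.25 * eta.max()
    print("cells flagged for refinement:", np.flatnonzero(flag))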
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem-solving approach that requires no a priori determined reasoning path. Thus it has the potential to find solutions closer to the true optimum for the multidisciplinary design problems found in today's aerospace industries.
Physical Activity and Food Environments: Solutions to the Obesity Epidemic
Sallis, James F; Glanz, Karen
2009-01-01
Context: Environmental, policy, and societal changes are important contributors to the rapid rise in obesity over the past few decades, and there has been substantial progress toward identifying environmental and policy factors related to eating and physical activity that can point toward solutions. This article is a status report on research on physical activity and food environments, and it suggests how these findings can be used to improve diet and physical activity and to control or reduce obesity. Methods: This article summarizes and synthesizes recent reviews and provides examples of representative studies. It also describes ongoing innovative interventions and policy change efforts that were identified through conference presentations, media coverage, and websites. Findings: Numerous cross-sectional studies have consistently demonstrated that some attributes of built and food environments are associated with physical activity, healthful eating, and obesity. Residents of walkable neighborhoods who have good access to recreation facilities are more likely to be physically active and less likely to be overweight or obese. Residents of communities with ready access to healthy foods also tend to have more healthful diets. Disparities in environments and policies that disadvantage low-income communities and racial minorities have been documented as well. Evidence from multilevel studies, prospective research, and quasi-experimental evaluations of environmental changes is just beginning to emerge. Conclusions: Environment, policy, and multilevel strategies for improving diet, physical activity, and obesity control are recommended based on a rapidly growing body of research and the collective wisdom of leading expert organizations. A public health imperative to identify and implement solutions to the obesity epidemic warrants the use of the most promising strategies while continuing to build the evidence base. PMID:19298418
NASA Astrophysics Data System (ADS)
Sun, Yuan; Bhattacherjee, Anol
2011-11-01
Information technology (IT) usage within organisations is a multi-level phenomenon that is influenced by individual-level and organisational-level variables. Yet, current theories, such as the unified theory of acceptance and use of technology, describe IT usage as solely an individual-level phenomenon. This article integrates salient organisational-level variables such as user training, top management support and technical support within an individual-level model to postulate a multi-level model of organisational IT usage. The multi-level model was then empirically validated using multi-level data collected from 128 end users and 26 managers in 26 firms in China regarding their use of enterprise resource planning systems, analysed using the multi-level structural equation modelling (MSEM) technique. We demonstrate the utility of MSEM analysis of multi-level data relative to the more common structural equation modelling analysis of single-level data and show how single-level data can be aggregated to approximate multi-level analysis when multi-level data collection is not possible. We hope that this article will motivate future scholars to employ multi-level data and multi-level analysis for understanding organisational phenomena that are truly multi-level in nature.
A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good agreement is obtained between the present results, experimental measurements, and earlier structured-grid results.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Ijabadeniyi, Oluwatosin Ademola; Mnyandu, Elizabeth
2017-04-13
The effectiveness of sodium dodecyl sulphate (SDS), sodium hypochlorite solution and levulinic acid in reducing the survival of heat adapted and chlorine adapted Listeria monocytogenes ATCC 7644 was evaluated. The results against heat adapted L. monocytogenes revealed that sodium hypochlorite solution was the least effective, achieving log reduction of 2.75, 2.94 and 3.97 log colony forming unit (CFU)/mL for 1, 3 and 5 minutes, respectively. SDS was able to achieve 8 log reduction for both heat adapted and chlorine adapted bacteria. When used against chlorine adapted L. monocytogenes, sodium hypochlorite solution achieved log reduction of 2.76, 2.93 and 3.65 log CFU/mL for 1, 3 and 5 minutes, respectively. Using levulinic acid on heat adapted bacteria achieved log reduction of 3.07, 2.78 and 4.97 log CFU/mL for 1, 3 and 5 minutes, respectively. On chlorine adapted bacteria levulinic acid achieved log reduction of 2.77, 3.07 and 5.21 log CFU/mL for 1, 3 and 5 minutes, respectively. Using a mixture of 0.05% SDS and 0.5% levulinic acid on heat adapted bacteria achieved log reduction of 3.13, 3.32 and 4.79 log CFU/mL for 1, 3 and 5 minutes while on chlorine adapted bacteria it achieved 3.20, 3.33 and 5.66 log CFU/mL, respectively. Increasing contact time also increased log reduction for both test pathogens. A storage period of up to 72 hours resulted in progressive log reduction for both test pathogens. Results also revealed that there was a significant difference (P≤0.05) among contact times, storage times and sanitizers. Findings from this study can be used to select suitable sanitizers and contact times for heat and chlorine adapted L. monocytogenes in the fresh produce industry.
Asymmetric Multilevel Outphasing (AMO): A New Architecture for All-Silicon mm-Wave Transmitter ICs
2015-06-12
Fragments of the abstract mention power amplifiers for mobile basestation infrastructure and handsets, linearization solutions for analog front-ends (NanoSemi Inc.), the need for high-precision, high-throughput and energy-efficient backend processing in flexible multi-standard radio chips, and a 64-QAM-capable AMO SCS backend whose peak PAE is affected by less than 1% (excerpted in IEEE Journal of Solid-State Circuits, Vol. 48).
A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.
Gallardo, José E
2012-01-01
The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic that tackles the problem by applying, at different levels, a probabilistic variant of the classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.
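To make the beam-search idea concrete, here is a minimal Python sketch of a probabilistic beam search for the shortest common supersequence: each partial solution is a vector of per-string pointers, candidate extensions are scored by the number of characters still unmatched, and the next beam is sampled with probability weighted by that score. The function name, scoring rule, temperature parameter and sampling scheme are illustrative assumptions, not the multilevel algorithm proposed in the paper.

import random

def probabilistic_beam_search_scs(strings, beam_width=10, temperature=1.0, seed=0):
    # A state is a tuple of per-string pointers plus the supersequence built so far.
    rng = random.Random(seed)
    alphabet = sorted(set(c for s in strings for c in s))
    beam = [{"ptrs": tuple(0 for _ in strings), "seq": ""}]
    while beam:
        finished = [s for s in beam if all(p == len(t) for p, t in zip(s["ptrs"], strings))]
        if finished:
            return min(finished, key=lambda s: len(s["seq"]))["seq"]
        candidates = []
        for state in beam:
            for ch in alphabet:
                # Appending ch advances every string whose next unmatched char is ch.
                new_ptrs = tuple(p + 1 if p < len(t) and t[p] == ch else p
                                 for p, t in zip(state["ptrs"], strings))
                if new_ptrs == state["ptrs"]:
                    continue  # ch matches nothing: useless extension
                candidates.append({"ptrs": new_ptrs, "seq": state["seq"] + ch})
        def remaining(s):
            return sum(len(t) - p for p, t in zip(s["ptrs"], strings))
        # Probabilistic selection: sample the next beam weighted by solution quality.
        weights = [pow(1.0 / (1 + remaining(c)), 1.0 / temperature) for c in candidates]
        pool = list(zip(candidates, weights))
        beam = []
        for _ in range(min(beam_width, len(candidates))):
            total = sum(w for _, w in pool)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, (c, w) in enumerate(pool):
                acc += w
                if acc >= r:
                    beam.append(c)
                    pool.pop(i)
                    break
    return ""

# usage on a tiny instance
print(probabilistic_beam_search_scs(["cagt", "gact", "ctt"], beam_width=20))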
Parra-Cardona, J. Rubén; Bybee, Deborah; Sullivan, Cris M.; Domenech Rodríguez, Melanie M.; Dates, Brian; Tams, Lisa; Bernal, Guillermo
2016-01-01
Objective There is a dearth of empirical studies aimed at examining the impact of differential cultural adaptation of evidence-based clinical and prevention interventions. This prevention study consisted of a randomized controlled trial aimed at comparing the impact of two differentially culturally adapted versions of the evidence-based parenting intervention known as Parent Management Training, the Oregon Model (PMTO). Method The sample consisted of 103 Latina/o immigrant families (190 individual parents). Each family was allocated to one of three conditions: (a) a culturally adapted PMTO (CA), (b) culturally adapted and enhanced PMTO (CE), and (c) a wait-list control. Measurements were implemented at baseline (T1), treatment completion (T2) and 6-month follow-up (T3). Results Multi-level growth modeling analyses indicated statistically significant improvements on parenting skills for fathers and mothers (main effect) at 6-month follow-up in both adapted interventions, when compared to the control condition. With regard to parent-reported child behaviors, child internalizing behaviors were significantly lower for both parents in the CE intervention (main effect), compared with control at 6-month follow-up. No main effect was found for child externalizing behaviors. However, a Parent x Condition effect was found indicating a significant reduction of child externalizing behaviors for CE fathers compared to CA and control fathers at posttest and 6-month follow-up. Conclusion Present findings indicate the value of differential cultural adaptation research designs and the importance of examining effects for both mothers and fathers, particularly when culturally focused and gender variables are considered for intervention design and implementation. PMID:28045288
Bowen, Kathryn J.; Ebi, Kristie; Friel, Sharon; McMichael, Anthony J.
2013-01-01
Background Addressing climate change and its associated effects is a multi-dimensional and ongoing challenge. This includes recognizing that climate change will affect the health and wellbeing of all populations over short and longer terms, albeit in varied ways and intensities. That recognition has drawn attention to the need to take adaptive actions to lessen adverse impacts over the next few decades from unavoidable climate change, particularly in developing country settings. A range of sectors is responsible for appropriate adaptive policies and measures to address the health risks of climate change, including health services, water and sanitation, trade, agriculture, disaster management, and development. Objectives To broaden the framing of governance and decision-making processes by using innovative methods and assessments to illustrate the multi-sectoral nature of health-related adaptation to climate change. This is a shift from sector-specific to multi-level systems encompassing sectors and actors, across temporal and spatial scales. Design A review and synthesis of the current knowledge in the areas of health and climate change adaptation governance and decision-making processes. Results A novel framework is presented that incorporates social science insights into the formulation and implementation of adaptation activities and policies to lessen the health risks posed by climate change. Conclusion Clarification of the roles that different sectors, organizations, and individuals occupy in relation to the development of health-related adaptation strategies will facilitate the inclusion of health and wellbeing within multi-sector adaptation policies, thereby strengthening the overall set of responses to minimize the adverse health effects of climate change. PMID:24028938
2012-03-27
Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures (CCL Report TR-2012-03-03; grant FA9550-…). Fragments of the abstract mention pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, and detonation-induced structural …
The governance dimensions of water security: a review.
Bakker, Karen; Morinville, Cynthia
2013-11-13
Water governance is critical to water security, and to the long-term sustainability of the Earth's freshwater systems. This review examines recent debates regarding the governance dimensions of water security, including adaptive governance, polycentric governance, social learning and multi-level governance. The analysis emphasizes the political and institutional dimensions of water governance, and explores the relevance of social power, an overlooked yet important aspect of the water security debate. In addition, the review explores the intersection and potential synergies between water governance perspectives and risk-based approaches to water security, and offers critiques and suggestions for further research questions and agendas.
Undocumented migration in response to climate change
Nawrotzki, Raphael J.; Riosmena, Fernando; Hunter, Lori M.; Runfola, Daniel M.
2016-01-01
In the face of climate change induced economic uncertainty, households may employ migration as an adaptation strategy to diversify their livelihood portfolio through remittances. However, it is unclear whether such climate migration will be documented or undocumented. In this study we combine detailed migration histories with daily temperature and precipitation information for 214 weather stations to investigate whether climate change more strongly impacts undocumented or documented migration from 68 rural Mexican municipalities to the U.S. during the years 1986–1999. We employ two measures of climate change, the warm spell duration index (WSDI) and the precipitation during extremely wet days (R99PTOT). Results from multi-level event-history models demonstrate that climate-related international migration from rural Mexico was predominantly undocumented. We conclude that programs to facilitate climate change adaptation in rural Mexico may be more effective in reducing undocumented border crossings than increased border fortification. PMID:27570840
Yuen, Cynthia X; Fuligni, Andrew J; Gonzales, Nancy; Telzer, Eva H
2018-02-01
Youth who do not identify with or value their families (i.e., low family centrality) are considered to be at risk for maladjustment. However, the current study investigated whether low family centrality may be adaptive in negative family contexts (i.e., high family conflict) because youth's self-worth should be less tied to the quality of their family relationships. Multilevel models using daily diaries and latent variable interactions using longitudinal questionnaires indicated that, among a sample of 428 Mexican American adolescents (49.8% male, M age = 15.02 years), lower family centrality was generally detrimental to youth's well-being. However, for youth in adverse family environments, low family centrality ceased to function as a risk factor. The present findings suggest that family centrality values play a more nuanced role in youth well-being than previously believed, such that low family centrality may be an adaptive response to significant family challenges.
Multilevel Interventions: Measurement and Measures
Charns, Martin P.; Alligood, Elaine C.; Benzer, Justin K.; Burgess, James F.; Mcintosh, Nathalie M.; Burness, Allison; Partin, Melissa R.; Clauser, Steven B.
2012-01-01
Background Multilevel intervention research holds the promise of more accurately representing real-life situations and, thus, with proper research design and measurement approaches, facilitating effective and efficient resolution of health-care system challenges. However, taking a multilevel approach to cancer care interventions creates both measurement challenges and opportunities. Methods One thousand seventy-two cancer care articles from 2005 to 2010 were reviewed to examine the state of measurement in the multilevel intervention cancer care literature. Ultimately, 234 multilevel articles, 40 involving cancer care interventions, were identified. Additionally, literature from health services, social psychology, and organizational behavior was reviewed to identify measures that might be useful in multilevel intervention research. Results The vast majority of measures used in multilevel cancer intervention studies were individual level measures. Group-, organization-, and community-level measures were rarely used. Discussion of the independence, validity, and reliability of measures was scant. Discussion Measurement issues may be especially complex when conducting multilevel intervention research. Measurement considerations that are associated with multilevel intervention research include those related to independence, reliability, validity, sample size, and power. Furthermore, multilevel intervention research requires identification of key constructs and measures by level and consideration of interactions within and across levels. Thus, multilevel intervention research benefits from thoughtful theory-driven planning and design, an interdisciplinary approach, and mixed methods measurement and analysis. PMID:22623598
Pastor, Dena A; Lazowski, Rory A
2018-01-01
The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
Hernández-Delgado, E A
2015-12-15
Climate change has significantly impacted tropical ecosystems critical for sustaining local economies and community livelihoods at global scales. Coastal ecosystems have largely declined, threatening the principal source of protein, building materials, tourism-based revenue, and the first line of defense against storm swells and sea level rise (SLR) for small tropical islands. Climate change has also impacted public health (i.e., altered distribution and increased prevalence of allergies, water-borne, and vector-borne diseases). Rapid human population growth has exacerbated pressure over coupled social-ecological systems, with concomitant non-sustainable impacts on natural resources, water availability, food security and sovereignty, public health, and quality of life, which should increase vulnerability and erode adaptation and mitigation capacity. This paper examines cumulative and synergistic impacts of climate change in the challenging context of highly vulnerable small tropical islands. Multiple adaptive strategies of coupled social-ecological ecosystems are discussed. Multi-level, multi-sectorial responses are necessary for adaptation to be successful. Copyright © 2015 Elsevier Ltd. All rights reserved.
Adaptation of health care for migrants: whose responsibility?
Dauvrin, Marie; Lorant, Vincent
2014-07-08
In a context of increasing ethnic diversity, culturally competent strategies have been recommended to improve care quality and access to health care for ethnic minorities and migrants; their implementation by health professionals, however, has remained patchy. Most programs of cultural competence assume that health professionals accept that they have a responsibility to adapt to migrants, but this assumption has often remained at the level of theory. In this paper, we surveyed health professionals' views on their responsibility to adapt. Five hundred and sixty-nine health professionals from twenty-four inpatient and outpatient health services were selected according to their geographic location. All health care professionals were requested to complete a questionnaire about who should adapt to ethnic diversity: health professionals or patients. After a factorial analysis to identify the underlying responsibility dimensions, we performed a multilevel regression model in order to investigate individual and service covariates of responsibility attribution. Three dimensions emerged from the factor analysis: responsibility for the adaptation of communication, responsibility for the adaptation to the negotiation of values, and responsibility for the adaptation to health beliefs. Our results showed that the sense of responsibility for the adaptation of health care depended on the nature of the adaptation required: when the adaptation directly concerned communication with the patient, health professionals declared that they should be the ones to adapt; in relation to cultural preferences, however, the responsibility fell on the patient's shoulders. Most respondents were unclear in relation to adaptation to health beliefs. Regression indicated that being Belgian, not being a physician, and working in a primary-care service were associated with placing the burden of responsibility on the patient. Health care professionals do not consider it to be their responsibility to adapt to ethnic diversity. If health professionals do not feel a responsibility to adapt, they are less likely to be involved in culturally competent health care.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
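For readers unfamiliar with harmony search, the sketch below shows a minimal single-objective variant in Python in which the two control parameters (harmony memory considering rate and pitch adjusting rate) adapt themselves by re-sampling values that previously produced improvements. It is only a rough illustration of the self-adaptive idea; the SaMOHS algorithm in the paper is multi-objective, uses a parameter-setting-free technique and is applied to least-cost network design, none of which is reproduced here.

import random

def self_adaptive_harmony_search(obj, bounds, hms=20, iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [obj(x) for x in memory]
    hmcr_pool, par_pool = [0.9], [0.3]   # seed values; successful ones are appended
    for _ in range(iters):
        hmcr = rng.choice(hmcr_pool)     # self-adaptation: sample from past successes
        par = rng.choice(par_pool)
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                val = memory[rng.randrange(hms)][d]                # memory consideration
                if rng.random() < par:
                    val += rng.uniform(-1, 1) * 0.01 * (hi - lo)   # pitch adjustment
            else:
                val = rng.uniform(lo, hi)                          # random selection
            new.append(min(max(val, lo), hi))
        f = obj(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if f < scores[worst]:            # replace the worst harmony on improvement
            memory[worst], scores[worst] = new, f
            hmcr_pool.append(hmcr)       # remember parameters that led to improvement
            par_pool.append(par)
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# usage: minimise a simple sphere function in 3-D
print(self_adaptive_harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))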
Dunn, Erin C.; Masyn, Katherine E.; Yudron, Monica; Jones, Stephanie M.; Subramanian, S.V.
2014-01-01
The observation that features of the social environment, including family, school, and neighborhood characteristics, are associated with individual-level outcomes has spurred the development of dozens of multilevel or ecological theoretical frameworks in epidemiology, public health, psychology, and sociology, among other disciplines. Despite the widespread use of such theories in etiological, intervention, and policy studies, challenges remain in bridging multilevel theory and empirical research. This paper set out to synthesize these challenges and provide specific examples of methodological and analytical strategies researchers are using to gain a more nuanced understanding of the social determinants of psychiatric disorders, with a focus on children’s mental health. To accomplish this goal, we begin by describing multilevel theories, defining their core elements, and discussing what these theories suggest is needed in empirical work. In the second part, we outline the main challenges researchers face in translating multilevel theory into research. These challenges are presented for each stage of the research process. In the third section, we describe two methods being used as alternatives to traditional multilevel modeling techniques to better bridge multilevel theory and multilevel research. These are: (1) multilevel factor analysis and multilevel structural equation modeling; and (2) dynamic systems approaches. Through its review of multilevel theory, assessment of existing strategies, and examination of emerging methodologies, this paper offers a framework to evaluate and guide empirical studies on the social determinants of child psychiatric disorders as well as health across the lifecourse. PMID:24469555
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration used operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
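A stripped-down version of the h-adaptive part of such a scheme can be sketched as follows: an implicit BDF1 (backward Euler) step solved with Newton's method, a local error estimate obtained by step doubling, and an elementary controller that grows or shrinks the stepsize to hold the estimate near a tolerance. This is only a sketch of the general mechanism under those assumptions; the paper's procedure uses BDF orders 1 to 5, p-adaptivity via a stability test, and the full incompressible Navier-Stokes equations.

import math

def backward_euler_step(f, dfdy, t, y, h, newton_iters=8):
    # One implicit BDF1 step: solve z = y + h f(t+h, z) by Newton's method.
    z = y
    for _ in range(newton_iters):
        g = z - y - h * f(t + h, z)
        z -= g / (1.0 - h * dfdy(t + h, z))
    return z

def adaptive_bdf1(f, dfdy, t0, y0, t_end, tol=1e-6, h=1e-3):
    # h-adaptive BDF1: local error by step doubling, elementary stepsize controller.
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = backward_euler_step(f, dfdy, t, y, h)
        y_half = backward_euler_step(f, dfdy, t, y, h / 2)
        y_half = backward_euler_step(f, dfdy, t + h / 2, y_half, h / 2)
        err = abs(y_half - y_full)              # first-order local error estimate
        if err <= tol:                          # accept the step
            t, y = t + h, y_half
        # controller for a first-order method: exponent 1/(p+1) = 1/2
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return t, y

# usage: stiff-ish scalar test problem y' = -50 (y - cos t), y(0) = 0
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
print(adaptive_bdf1(f, dfdy, 0.0, 0.0, 1.0))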
NASA Astrophysics Data System (ADS)
Ernst, K.; Preston, B. L.; Tenggren, S.; Klein, R.; Gerger-Swartling, Å.
2017-12-01
Many challenges to adaptation decision-making and action have been identified across peer-reviewed and gray literature. These challenges have primarily focused on the use of climate knowledge for adaptation decision-making, the process of adaptation decision-making, and the needs of the decision-maker. Studies on climate change knowledge systems often discuss the imperative role of climate knowledge producers in adaptation decision-making processes and stress the need for producers to engage in knowledge co-production activities and to more effectively meet decision-maker needs. While the influence of climate knowledge producers on the co-production of science for adaptation decision-making is well-recognized, hardly any research has taken a direct approach to analyzing the challenges that climate knowledge producers face when undertaking science co-production. Those challenges can influence the process of knowledge production and may hinder the creation, utilization, and dissemination of actionable knowledge for adaptation decision-making. This study involves semi-structured interviews, focus groups, and participant observations to analyze, identify, and contextualize the challenges that climate knowledge producers in Sweden face as they endeavor to create effective climate knowledge systems for multiple contexts, scales, and levels across the European Union. Preliminary findings identify complex challenges related to education, training, and support; motivation, willingness, and culture; varying levels of prioritization; professional roles and responsibilities; the type and amount of resources available; and professional incentive structures. These challenges exist at varying scales and levels across individuals, organizations, networks, institutions, and disciplines. This study suggests that the creation of actionable knowledge for adaptation decision-making is not supported across scales and levels in the climate knowledge production landscape. Additionally, enabling the production of actionable knowledge for adaptation decision-making requires multi-level effort beyond the individual level.
A mixed integer bi-level DEA model for bank branch performance evaluation by Stackelberg approach
NASA Astrophysics Data System (ADS)
Shafiee, Morteza; Lotfi, Farhad Hosseinzadeh; Saleh, Hilda; Ghaderi, Mehdi
2016-03-01
One of the most complicated decision-making problems for managers is the evaluation of bank performance, which involves various criteria. There are many studies in the literature on bank efficiency evaluation by network DEA. These studies do not focus on multi-level networks. Wu (Eur J Oper Res 207:856-864, 2010) proposed a bi-level structure for cost efficiency for the first time. In this model, multi-level programming and cost efficiency were used, and nonlinear programming was employed to solve the model. In this paper, we focus on the multi-level structure and propose a bi-level DEA model, which we then solve using linear programming. Moreover, we significantly improve the way the optimum solution is reached, in comparison with the work by Wu (2010), by converting the NP-hard nonlinear program into a mixed integer linear program. This study uses a bi-level programming data envelopment analysis model that embodies internal structure with Stackelberg-game relationships to evaluate the performance of a banking chain. The perspective of decentralized decisions is taken in this paper to cope with complex interactions in the banking chain. The results derived from bi-level programming DEA can provide valuable insights and detailed information for managers to help them evaluate the performance of the banking chain as a whole using Stackelberg-game relationships. Finally, this model was applied to an Iranian bank to evaluate cost efficiency.
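For context on what a DEA efficiency score is, the sketch below solves the plain input-oriented CCR envelopment linear program for one decision-making unit with scipy. It is only a single-level baseline under illustrative data; the paper's contribution, the bi-level Stackelberg formulation and its mixed-integer linear reformulation, is not implemented here.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    # Input-oriented CCR envelopment LP for DMU o: minimise theta subject to
    # sum_j lam_j x_ij <= theta x_io and sum_j lam_j y_rj >= y_ro, lam >= 0.
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vector: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [o]], X])            # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # output constraints (flipped sign)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# usage: 5 hypothetical branches, 2 inputs (staff, cost), 1 output (transactions)
X = np.array([[20., 12., 30., 15., 25.], [5., 3., 8., 4., 6.]])
Y = np.array([[100., 60., 120., 90., 110.]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(5)])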
NASA Astrophysics Data System (ADS)
Wedding, L.; Hartge, E. H.; Guannel, G.; Melius, M.; Reiter, S. M.; Ruckelshaus, M.; Guerry, A.; Caldwell, M.
2014-12-01
To support decision-makers in their efforts to manage coastal resources in a changing climate the Natural Capital Project and the Center for Ocean Solutions are engaging in, informing, and helping to shape climate adaptation planning at various scales throughout coastal California. Our team is building collaborations with regional planners and local scientific and legal experts to inform local climate adaptation decisions that might minimize the economic and social losses associated with rising seas and more damaging storms. Decision-makers are considering engineered solutions (e.g. seawalls), natural solutions (e.g. dune or marsh restoration), and combinations of the two. To inform decisions about what kinds of solutions might best work in specific locations, we are comparing alternate climate and adaptation scenarios. We will present results from our use of the InVEST ecosystem service models in Sonoma County, with an initial focus on protection from coastal hazards due to erosion and inundation. By strategically choosing adaptation alternatives, communities and agencies can work to protect people and property while also protecting or restoring dwindling critical habitat and the full suite of benefits those habitats provide to people.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
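One standard way to tame the ill-posedness described above, independent of the adaptive-refinement strategy studied in the paper, is Tikhonov regularisation of the discretised source-to-scalp mapping. The sketch below assumes a lead-field matrix L (rows: scalp electrodes, columns: discretised sources) and measured scalp potentials b; the function name, the regularisation weight alpha and the toy dimensions are illustrative assumptions.

import numpy as np

def tikhonov_inverse(L, b, alpha):
    # Regularised solution of the ill-posed inverse problem:
    # minimise ||L s - b||^2 + alpha ||s||^2 over source amplitudes s.
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + alpha * np.eye(n), L.T @ b)

# usage with a random toy lead-field (32 electrodes, 500 sources)
rng = np.random.default_rng(0)
L = rng.normal(size=(32, 500))
s_true = np.zeros(500)
s_true[[40, 260]] = 1.0                       # two active sources
b = L @ s_true + 0.01 * rng.normal(size=32)   # noisy scalp measurements
s_hat = tikhonov_inverse(L, b, alpha=1.0)
print(np.argsort(-np.abs(s_hat))[:5])         # indices of the strongest recovered sources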
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
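To illustrate the flux-splitting idea underlying the shock-capturing schemes compared above, here is a first-order Lax-Friedrichs flux-split scheme for the 1-D inviscid Burgers equation in Python. It is only a low-order, one-dimensional sketch with simple outflow boundaries, not the second- or fourth-order 2-D Euler discretizations of the study; the CFL number and the initial profile are arbitrary choices.

import numpy as np

def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.9):
    # f(u) = u^2/2 is split as f = f^+ + f^-, with f^± = (f ± a u)/2 and a a wave-speed bound;
    # f^+ is upwinded from the left and f^- from the right at each cell interface.
    u = u0.copy()
    t = 0.0
    while t < t_end:
        a = np.max(np.abs(u)) + 1e-12
        dt = min(cfl * dx / a, t_end - t)
        f = 0.5 * u ** 2
        fp = 0.5 * (f + a * u)
        fm = 0.5 * (f - a * u)
        flux = np.empty(len(u) + 1)
        flux[1:-1] = fp[:-1] + fm[1:]
        flux[0], flux[-1] = fp[0] + fm[0], fp[-1] + fm[-1]   # simple outflow ends
        u = u - dt / dx * (flux[1:] - flux[:-1])
        t += dt
    return u

# usage: a right-moving shock forming from a smooth initial profile
x = np.linspace(0, 1, 200)
print(lax_friedrichs_burgers(np.sin(2 * np.pi * x) + 1.5, x[1] - x[0], 0.2)[:5])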
Quality factors and local adaption (with applications in Eulerian hydrodynamics)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, W.P.
1992-06-17
Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.
Multiscale Multilevel Approach to Solution of Nanotechnology Problems
NASA Astrophysics Data System (ADS)
Polyakov, Sergey; Podryga, Viktoriia
2018-02-01
The paper is devoted to a multiscale multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach uses a combination of continuum mechanics models and Newtonian dynamics for individual particles. This combination includes three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as a mathematical model at the macrolevel for gas and solid states. The system of Newton equations is used as a mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods for their parallelization are provided. The parallelization uses the principles of geometric parallelism and efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing of the solvers. The developed approach was tested on the example of nitrogen outflow from a high-pressure balloon into a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
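The particle-level part of such a scheme, Newton's equations advanced by Verlet integration inside each grid cell, can be sketched as follows. The velocity-Verlet form, the placeholder force function and the harmonic-spring usage example are illustrative assumptions; they stand in for the interatomic potentials and the cell-wise parallel organisation described in the paper.

import numpy as np

def velocity_verlet(positions, velocities, masses, force_fn, dt, steps):
    # Velocity-Verlet integration of Newton's equations for a group of particles,
    # as would be applied independently inside each grid cell.
    x, v = positions.copy(), velocities.copy()
    a = force_fn(x) / masses[:, None]
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt          # position update
        a_new = force_fn(x) / masses[:, None]    # forces at the new positions
        v += 0.5 * (a + a_new) * dt              # velocity update (averaged acceleration)
        a = a_new
    return x, v

# usage: two particles bound by a harmonic spring (k = 1, rest length 1)
def spring_forces(x, k=1.0, r0=1.0):
    d = x[1] - x[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r                    # force on particle 1
    return np.array([-f, f])

x0 = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
v0 = np.zeros_like(x0)
print(velocity_verlet(x0, v0, np.array([1.0, 1.0]), spring_forces, 0.01, 1000))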
Huai, Jianjun
2016-01-01
Although the integrated indicator methods have become popular for assessing vulnerability to climate change, their proliferation has introduced a confusing array of scales and indicators that cause a science-policy gap. I argue for a clear adaptation pathway in an “integrative typology” of regional vulnerability that matches appropriate scales, optimal measurements and adaptive strategies in a six-dimensional and multi-level analysis framework of integration and typology inspired by the “5W1H” questions: “Who is concerned about how to adapt to the vulnerability of what to what in some place (where) at some time (when)?” Using the case of the vulnerability of wheat, barley and oats to drought in Australian wheat sheep zones during 1978–1999, I answer the “5W1H” questions through establishing the “six typologies” framework. I then optimize the measurement of vulnerability through contrasting twelve kinds of vulnerability scores with the divergence of crops yields from their regional mean. Through identifying the socioeconomic constraints, I propose seven generic types of crop-drought vulnerability and local adaptive strategy. Our results illustrate that the process of assessing vulnerability and selecting adaptations can be enhanced using a combination of integration, optimization and typology, which emphasize dynamic transitions and transformations between integration and typology. PMID:27670975
Quality assessment and control of finite element solutions
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Babuska, Ivo
1987-01-01
Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive movement practical are identified.
Barriers to Uptake of Conservation Agriculture in southern Africa: Multi-level Analyses from Malawi
NASA Astrophysics Data System (ADS)
Dougill, Andrew; Stringer, Lindsay; Whitfield, Stephen; Wood, Ben; Chinseu, Edna
2015-04-01
Conservation agriculture is a key set of actions within the growing body of climate-smart agriculture activities being advocated and rolled out across much of the developing world. Conservation agriculture has purported benefits for environmental quality, food security and the sustained delivery of ecosystem services. In this paper, new multi-level analyses are presented, assessing the current barriers to adoption of conservation agriculture practices in Malawi. Despite significant donor initiatives that have targeted conservation agriculture projects, uptake rates remain low. This paper synthesises studies from across three levels in Malawi: (i) the national level, drawing on policy analysis, interviews and a multi-stakeholder workshop; (ii) the district level, via assessments of development plans and of District Office and extension service support; and (iii) the local level, through data gained during community/household-level studies in Dedza District that have gained significant donor support for conservation agriculture as a component of climate-smart agriculture initiatives. The national level multi-stakeholder Conservation Agriculture workshop identified three areas requiring collaborative research and outlined routes for the empowerment of the National Conservation Agriculture Task Force to advance uptake of conservation agriculture and deliver associated benefits in terms of agricultural development, climate adaptation and mitigation. District level analyses highlight that whilst District Development Plans are now checked against climate change adaptation and mitigation criteria, capacity and knowledge limitations exist at the District level, preventing project interventions from being successfully up-scaled. Community level assessments highlight the need for increased community participation at the project-design phase and identify a pressing requirement for conservation agriculture planning processes (in particular those driven by investments in climate-smart agriculture) to better accommodate, and respond to, the differentiated needs of marginalised groups (e.g. poor, elderly, carers). We identify good practices that can be used to design, plan and implement conservation agriculture projects such that the multiple benefits can be realised. We further outline changes to multi-level policy and institutional arrangements to facilitate greater adoption of conservation agriculture in Malawi, noting the vital importance of District-level institutions and the amendments and capacity building required within agricultural extension services. We highlight the need for capacity building and support to ensure conservation agriculture's multiple benefits are realised more widely as a route towards sustainable land management.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, combined with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines in that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step (or the initial conditions) and an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. In turn, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the resulting spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Moreover, this new Eulerian-Lagrangian collocation scheme resolves all the numerical problems mentioned above thanks to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods.
Finally, numerical tests show that this approach yields not only an accurate velocity field but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
NASA Astrophysics Data System (ADS)
Zuiker, Steven; Reid Whitaker, J.
2014-04-01
This paper describes the 5E+I/A inquiry model and reports a case study of one curricular enactment by a US fifth-grade classroom. A literature review establishes the model's conceptual adequacy with respect to longstanding research related to both the 5E inquiry model and multiple, incremental innovations of it. As a collective line of research, the review highlights a common emphasis on formative assessment, at times coupled either with differentiated instruction strategies or with activities that target the generalization of learning. The 5E+I/A model contributes a multi-level assessment strategy that balances formative and summative functions of multiple forms of assessment in order to support classroom participation while still attending to individual achievement. The case report documents the enactment of a weeklong 5E+I/A curricular design as a preliminary account of the model's empirical adequacy. A descriptive and analytical narrative illustrates variable ways that multi-level assessment makes student thinking visible and pedagogical decision-making more powerful. In light of both, it also documents productive adaptations to a flexible curricular design and considers future research to advance this collective line of inquiry.
Evolution of neuroarchitecture, multi-level analyses and calibrative reductionism
Berntson, Gary G.; Norman, Greg J.; Hawkley, Louise C.; Cacioppo, John T.
2012-01-01
Evolution has sculpted the incredibly complex human nervous system, among the most complex functions of which extend beyond the individual to an intricate social structure. Although these functions are deterministic, those determinants are legion, heavily interacting and dependent on a specific evolutionary trajectory. That trajectory was directed by the adaptive significance of quasi-random genetic variations, but was also influenced by chance and caprice. With a different evolutionary pathway, the same neural elements could subserve functions distinctly different from what they do in extant human brains. Consequently, the properties of higher level neural networks cannot be derived readily from the properties of the lower level constituent elements, without studying these elements in the aggregate. Thus, a multi-level approach to integrative neuroscience may offer an optimal strategy. Moreover, the process of calibrative reductionism, by which concepts and understandings from one level of organization or analysis can mutually inform and ‘calibrate’ those from other levels (both higher and lower), may represent a viable approach to the application of reductionism in science. This is especially relevant in social neuroscience, where the basic subject matter of interest is defined by interacting organisms across diverse environments. PMID:23386961
Generative mechanistic explanation building in undergraduate molecular and cellular biology
NASA Astrophysics Data System (ADS)
Southard, Katelyn M.; Espindola, Melissa R.; Zaepfel, Samantha D.; Bolger, Molly S.
2017-09-01
When conducting scientific research, experts in molecular and cellular biology (MCB) use specific reasoning strategies to construct mechanistic explanations for the underlying causal features of molecular phenomena. We explored how undergraduate students applied this scientific practice in MCB. Drawing from studies of explanation building among scientists, we created and applied a theoretical framework to explore the strategies students use to construct explanations for 'novel' biological phenomena. Specifically, we explored how students navigated the multi-level nature of complex biological systems using generative mechanistic reasoning. Interviews were conducted with introductory and upper-division biology students at a large public university in the United States. Results of qualitative coding revealed key features of students' explanation building. Students used modular thinking to consider the functional subdivisions of the system, which they 'filled in' to varying degrees with mechanistic elements. They also hypothesised the involvement of mechanistic entities and instantiated abstract schema to adapt their explanations to unfamiliar biological contexts. Finally, we explored the flexible thinking that students used to hypothesise the impact of mutations on multi-leveled biological systems. Results revealed a number of ways that students drew mechanistic connections between molecules, functional modules (sets of molecules with an emergent function), cells, tissues, organisms and populations.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers' equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Smokowski, Paul R; Guo, Shenyang; Rose, Roderick; Evans, Caroline B R; Cotter, Katie L; Bacallao, Martica
2014-11-01
The current study filled significant gaps in our knowledge of developmental psychopathology by examining the influence of multilevel risk factors and developmental assets on longitudinal trajectories of internalizing symptoms and self-esteem in an exceptionally culturally diverse sample of rural adolescents. Integrating ecological and social capital theories, we explored whether positive microsystem transactions are associated with self-esteem while negative microsystem transactions increase the chances of internalizing problems. Data came from the Rural Adaptation Project, a 5-year longitudinal panel study of more than 4,000 middle school students from 28 public schools in two rural, disadvantaged counties in North Carolina. Three-level hierarchical linear models were estimated to predict internalizing symptoms (e.g., depression, anxiety) and self-esteem. Relative to other students, risk for internalizing problems and low self-esteem was elevated for aggressive adolescents, students who were hassled or bullied at school, and those who were rejected by peers or in conflict with their parents. Internalizing problems were also more common among adolescents from socioeconomically disadvantaged families and neighborhoods, among those in schools with more suspensions, in students who reported being pressured by peers, and in youth who required more teacher support. It is likely that these experiences left adolescents disengaged from developing social capital from ecological microsystems (e.g., family, school, peers). On the positive side, support from parents and friends and optimism about the future were key assets associated with lower internalizing symptoms and higher self-esteem. Self-esteem was also positively related to religious orientation, school satisfaction, and future optimism. These variables show active engagement with ecological microsystems. The implications and limitations are discussed.
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
A multilevel preconditioner for domain decomposition boundary systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.
1991-12-11
In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain, and which grow at most in proportion to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.
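Stated schematically (our paraphrase and notation, not the authors'), the two bounds on the preconditioned reduced boundary system read:

```latex
% B: multilevel preconditioner, S: reduced boundary (Schur complement) operator,
% J: number of levels; C is a constant independent of the mesh size.
\kappa(BS) \le C
  \quad \text{(multilevel spaces on the whole domain)}, \qquad
\kappa(BS) \le C\,J
  \quad \text{(multilevel boundary spaces without interior extensions)}.
```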
Social stratification, classroom climate, and the behavioral adaptation of kindergarten children.
Boyce, W Thomas; Obradovic, Jelena; Bush, Nicole R; Stamperdahl, Juliet; Kim, Young Shin; Adler, Nancy
2012-10-16
Socioeconomic status (SES) is the single most potent determinant of health within human populations, from infancy through old age. Although the social stratification of health is nearly universal, there is persistent uncertainty regarding the dimensions of SES that effect such inequalities and thus little clarity about the principles of intervention by which inequalities might be abated. Guided by animal models of hierarchical organization and the health correlates of subordination, this prospective study examined the partitioning of children's adaptive behavioral development by their positions within kindergarten classroom hierarchies. A sample of 338 5-y-old children was recruited from 29 Berkeley, California public school classrooms. A naturalistic observational measure of social position, parent-reported family SES, and child-reported classroom climate were used in estimating multilevel, random-effects models of children's adaptive behavior at the end of the kindergarten year. Children occupying subordinate positions had significantly more maladaptive behavioral outcomes than their dominant peers. Further, interaction terms revealed that low family SES and female sex magnified, and teachers' child-centered pedagogical practices diminished, the adverse influences of social subordination. Taken together, results suggest that, even within early childhood groups, social stratification is associated with a partitioning of adaptive behavioral outcomes and that the character of larger societal and school structures in which such groups are nested can moderate rank-behavior associations.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
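For context, the tridiagonal solver referred to above is typically the Thomas algorithm; a generic sketch (not taken from the report) is:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n; a[0] and c[-1] unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):          # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In a Picard iteration for a nonlinear problem such as Richards' equation, the nonlinear coefficients would be frozen at the previous iterate, the resulting tridiagonal system solved with a routine like this, and the process repeated until the iterates converge.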
Parra-Cardona, J Rubén; Bybee, Deborah; Sullivan, Cris M; Rodríguez, Melanie M Domenech; Dates, Brian; Tams, Lisa; Bernal, Guillermo
2017-01-01
There is a dearth of empirical studies aimed at examining the impact of differential cultural adaptation of evidence-based clinical and prevention interventions. This prevention study consisted of a randomized controlled trial aimed at comparing the impact of 2 differentially culturally adapted versions of the evidence-based parenting intervention known as Parent Management Training, the Oregon Model (PMTOR). The sample consisted of 103 Latina/o immigrant families (190 individual parents). Each family was allocated to 1 of 3 conditions: (a) a culturally adapted PMTO (CA), (b) culturally adapted and enhanced PMTO (CE), and (c) a wait-list control. Measurements were implemented at baseline (T1), treatment completion (T2) and 6-month follow up (T3). Multilevel growth modeling analyses indicated statistically significant improvements on parenting skills for fathers and mothers (main effect) at 6-month follow-up in both adapted interventions, when compared with the control condition. With regard to parent-reported child behaviors, child internalizing behaviors were significantly lower for both parents in the CE intervention (main effect), compared with control at 6-month follow-up. No main effect was found for child externalizing behaviors. However, a Parent × Condition effect was found indicating a significant reduction of child externalizing behaviors for CE fathers compared with CA and control fathers at posttest and 6-month follow-up. Present findings indicate the value of differential cultural adaptation research designs and the importance of examining effects for both mothers and fathers, particularly when culturally focused and gender variables are considered for intervention design and implementation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
New self-magnetically insulated connection of multilevel accelerators to a common load
VanDevender, J. Pace; Langston, William L.; Pasik, Michael F.; ...
2015-03-04
A new way to connect pulsed-power modules to a common load is presented. Unlike previous connectors, the clam shell magnetically insulated transmission line (CSMITL) has magnetic nulls only at large radius where the cathode electric field is kept below the threshold for emission, has only a simply connected magnetic topology to avoid plasma motion along magnetic field lines into highly stressed gaps, and has electron injectors that ensure efficient electron flow even in the limiting case of self-limited MITLs. Multilevel magnetically insulated transmission lines with a posthole convolute are the standard solution but associated losses limit the performance of state-of-the-art accelerators. Mitigating these losses is critical for the next generation of pulsed-power accelerators. A CSMITL has been successfully implemented on the Saturn accelerator. A reference design for the Z accelerator is derived and presented. The design conservatively meets the design requirements and shows excellent transport efficiency in three simulations of increasing complexity: circuit simulations, electromagnetic fields only with Emphasis, and fields plus electron and ion emission with Quicksilver.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new (for plasma physics) highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
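A generic multilevel Monte Carlo estimator (a sketch of the general technique applied to a toy stochastic differential equation, not the authors' Coulomb-collision code) combines coarse and fine time-step solutions so that most samples are taken on the cheap levels; the drift, diffusion and sample counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_level(level, n_samples, T=1.0, x0=1.0):
    """Euler-Maruyama samples for E[X_T] of dX = -X dt + 0.5 dW on level
    `level` (2**level steps). Level 0 returns plain samples; higher levels
    return the fine-minus-coarse correction using coupled Brownian increments."""
    fine_steps = 2 ** level
    dt = T / fine_steps
    x_f = np.full(n_samples, x0)
    x_c = np.full(n_samples, x0)
    for k in range(fine_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_samples)
        x_f += -x_f * dt + 0.5 * dW
        if level > 0 and k % 2 == 1:
            # coarse path advances with the sum of two fine increments
            x_c += -x_c * (2 * dt) + 0.5 * (dW + dW_prev)
        dW_prev = dW
    return x_f if level == 0 else x_f - x_c

# Telescoping MLMC estimate: coarse estimate plus level-wise corrections,
# with geometrically fewer samples on finer (more expensive) levels.
levels, n0 = 5, 40000
estimate = sum(np.mean(sample_level(l, n0 // 2 ** l)) for l in range(levels + 1))
```

In practice the per-level sample counts are chosen from estimated variances and costs (this is where the O(ε⁻²) complexity comes from) rather than fixed geometrically as in this sketch.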
Boundary based on exchange symmetry theory for multilevel simulations. I. Basic theory.
Shiga, Motoyuki; Masia, Marco
2013-07-28
In this paper, we lay the foundations for a new method that allows multilevel simulations of a diffusive system, i.e., a system where a flux of particles through the boundaries might disrupt the primary region. The method is based on the use of flexible restraints that maintain the separation between inner and outer particles. It is shown that, by introducing a bias potential that accounts for the exchange symmetry of the system, the correct statistical distribution is preserved. Using a toy model consisting of non-interacting particles in an asymmetric potential well, we prove that the method is formally exact, and that it can be simplified by considering only up to a couple of particle exchanges without a loss of accuracy. A real-world test is then made by considering a hybrid MM(∗)/MM calculation of a cesium ion in water. In this case, the single exchange approximation is sound enough that the results superimpose on the exact solutions. Potential applications of this method to many different hybrid QM/MM systems are discussed, as well as its limitations and strengths in comparison to existing approaches.
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to stochastic behaviours of customer demands and have a low probability of route failures, respectively, in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in the VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of the memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in the VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in the VRPSD.
Multilevel structural equation models for assessing moderation within and across levels of analysis.
Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J
2016-06-01
Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Rush, Christina L.; Darling, Margaret; Elliott, Maria Gloria; Febus-Sampayo, Ivis; Kuo, Charlene; Muñoz, Juliana; Duron, Ysabel; Torres, Migdalia; Galván, Claudia Campos; Gonzalez, Florencia; Caicedo, Larisa; Nápoles, Anna; Jensen, Roxanne E.; Anderson, Emily; Graves, Kristi D.
2014-01-01
Introduction Few studies have evaluated interventions to improve quality of life (QOL) for Latina breast cancer survivors and caregivers. Following best practices in community-based participatory research (CBPR), we established a multi-level partnership among Latina survivors, caregivers, community-based organizations (CBOs), clinicians and researchers to evaluate a survivor-caregiver QOL intervention. Methods A CBO in the mid-Atlantic region, Nueva Vida, developed a patient-caregiver program called Cuidando a mis Cuidadores (Caring for My Caregivers), to improve outcomes important to Latina cancer survivors and their families. Together with an academic partner, Nueva Vida and 3 CBOs established a multi-level team of researchers, clinicians, Latina cancer survivors, and caregivers to conduct a national randomized trial to compare the patient-caregiver program to usual care. Results Incorporating team feedback and programmatic considerations, we adapted the prior patient-caregiver program into an 8-session patient- and caregiver-centered intervention that includes skill-building workshops such as managing stress, communication, self-care, social well-being, and impact of cancer on sexual intimacy. We will measure QOL domains with the Patient-Reported Outcomes Measurement Information System (PROMIS), dyadic communication between the survivor and caregiver, and survivors’ adherence to recommended cancer care. To integrate the intervention within each CBO, we conducted interactive training on the protection of human subjects, qualitative interviewing, and intervention delivery. Conclusion The development and engagement process for our QOL intervention study is innovative because it is both informed by and directly impacts underserved Latina survivors and caregivers. The CBPR-based process demonstrates successful multi-level patient engagement through collaboration among researchers, clinicians, community partners, survivors and caregivers. PMID:25377349
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures are presented and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. The effects of point spacing and Taylor series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
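The gridless idea of recovering derivatives from a Taylor expansion about each point can be illustrated generically with a 1-D least-squares fit; the point cloud, expansion order and test function below are hypothetical and not taken from the report.

```python
import numpy as np
from math import factorial

def gridless_derivatives(x_nodes, u_nodes, x0, order=2):
    """Estimate u, u', u'', ... at x0 from scattered 1-D neighbours by a
    least-squares fit of a Taylor polynomial of the given order."""
    dx = x_nodes - x0
    # Columns 1, dx, dx^2/2!, ... so the fitted coefficients are the derivatives
    A = np.column_stack([dx ** k / factorial(k) for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, u_nodes, rcond=None)
    return coeffs

# Hypothetical scattered points around x0 = 0.3 for u(x) = sin(x)
pts = np.array([0.21, 0.26, 0.33, 0.37, 0.42])
u0, du, d2u = gridless_derivatives(pts, np.sin(pts), 0.3)
```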
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
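The tree-based storage described above, with recursive subdivision from a single root cell, can be sketched generically as follows; the refinement criterion and cell contents here are placeholders, not the authors' cut-cell scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A Cartesian cell; leaves carry the solution, internal nodes hold children."""
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

    def refine(self, needs_refinement, max_depth, depth=0):
        """Recursively subdivide wherever the supplied criterion flags the cell."""
        if depth >= max_depth or not needs_refinement(self):
            return
        h = self.size / 2.0
        self.children = [Cell(self.x + dx * h, self.y + dy * h, h)
                         for dx in (0, 1) for dy in (0, 1)]
        for child in self.children:
            child.refine(needs_refinement, max_depth, depth + 1)

    def leaves(self):
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

# Hypothetical criterion: refine cells whose centers lie near a circular "body"
root = Cell(0.0, 0.0, 1.0)
near_body = lambda c: abs((c.x + c.size / 2) ** 2 + (c.y + c.size / 2) ** 2 - 0.3 ** 2) < c.size
root.refine(near_body, max_depth=5)
grid = list(root.leaves())
```

The tree traversal in leaves() is what provides the natural cell-to-cell connectivity mentioned in the abstract; solution-adaptive refinement simply replaces the geometric criterion with one based on flow gradients.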
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
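The Richardson-based discretization error estimate mentioned above follows the standard pattern (our generic notation, not the code in the report): compare the solution on the working mesh with one computed on a mesh coarsened or refined by a factor of two.

```latex
% u_h, u_{2h}: solutions on meshes of spacing h and 2h; p: order of accuracy of the scheme.
% E_h estimates the discretization error of u_h and can also be used to correct it.
E_h \;\approx\; \frac{u_{2h} - u_h}{2^{p} - 1},
\qquad
u_{\text{exact}} \;\approx\; u_h - E_h .
```

Cells where the magnitude of such an estimate exceeds the prescribed tolerance are the ones flagged by the refinement indicator.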
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means that the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on GRNs is a multi-variable, multi-objective, multi-peak optimization problem for which it is difficult to obtain satisfactory, let alone high-quality, solutions. A new best-neighbor particle swarm optimization algorithm is proposed to perform this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance for designing GRNs with superior robust adaptation.
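The Latin hypercube initialization mentioned above can be sketched generically; the parameter count matches the 12 undetermined parameters, but the bounds and swarm size below are illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=1):
    """Standard Latin hypercube design: one sample per equal-width bin in each
    dimension, with the bin order shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    n_dim = len(bounds)
    samples = np.empty((n_samples, n_dim))
    for d, (lo, hi) in enumerate(bounds):
        edges = np.linspace(lo, hi, n_samples + 1)
        pts = rng.uniform(edges[:-1], edges[1:])  # one point in each bin
        samples[:, d] = rng.permutation(pts)
    return samples

# Hypothetical bounds for the 12 undetermined rate parameters
swarm = latin_hypercube(50, [(1e-3, 10.0)] * 12)
```

Compared with purely uniform random initialization, this stratification guarantees that each parameter range is covered evenly, which matters for multi-peak landscapes like the one described in the abstract.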
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
Dark channels in resonant tunneling transport through artificial atoms.
Vaz, Eduardo; Kyriakidis, Jordan
2008-07-14
We investigate sequential tunneling through a multilevel quantum dot confining multiple electrons in the regime where several channels are available for transport within the bias window. By analyzing solutions to the master equations of the reduced density matrix, we give general conditions on when the presence of a second transport channel in the bias window quenches transport through the quantum dot. These conditions are in terms of distinct tunneling anisotropies which may aid in explaining the occurrence of negative differential conductance in quantum dots in the nonlinear regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin Whiting; Crane, Nathan K.; Heinstein, Martin W.
2011-03-01
Adagio is a Lagrangian, three-dimensional, implicit code for the analysis of solids and structures. It uses a multi-level iterative solver, which enables it to solve problems with large deformations, nonlinear material behavior, and contact. It also has a versatile library of continuum and structural elements, and an extensive library of material models. Adagio is written for parallel computing environments, and its solvers allow for scalable solutions of very large problems. Adagio uses the SIERRA Framework, which allows for coupling with other SIERRA mechanics codes. This document describes the functionality and input structure for Adagio.
An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem
NASA Astrophysics Data System (ADS)
Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab
2018-05-01
This paper aims to present a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm is validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features to estimate the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of UGVs is able to construct a large reliable map and localise themselves within this map without any user intervention. The most popular solutions to this problem are EKF-SLAM, nonlinear H-infinity (H∞) SLAM and FastSLAM. The first suffers from two important problems: the poor consistency caused by linearization and the calculation of Jacobians. The second, the H∞ filter, is very promising because it makes no assumptions about noise characteristics, while the latter is not suitable for real-time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. A cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGV SLAM problem. Our main contribution consists in adapting the SVSF filter to solve the decentralised cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two Pioneer mobile robots equipped with 2D laser telemetry sensors. Good results are obtained by the cooperative adaptive SVSF-SLAM algorithm compared to the cooperative EKF/H∞-SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm and show the efficiency of the proposed algorithm, which is more robust, stable and adapted to real-time applications.
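The covariance intersection used to fuse shared features can be illustrated with the standard CI update (a generic sketch; the weight here is chosen by a simple trace-minimizing grid search rather than the authors' adaptive rule, and the landmark estimates are hypothetical).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates (x1, P1) and (x2, P2) of the same feature without
    knowledge of their cross-correlation, choosing the CI weight that
    minimizes the trace of the fused covariance."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best  # (fused state, fused covariance)

# Hypothetical 2-D landmark estimates received from two UGVs
x_f, P_f = covariance_intersection(np.array([1.0, 2.0]), np.diag([0.5, 0.1]),
                                   np.array([1.2, 1.9]), np.diag([0.1, 0.4]))
```

Because CI never assumes independence of the two estimates, it avoids the overconfidence that naive Kalman-style fusion produces when the same feature is shared repeatedly across a decentralised team.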
Multilevel SEM Strategies for Evaluating Mediation in Three-Level Data
ERIC Educational Resources Information Center
Preacher, Kristopher J.
2011-01-01
Strategies for modeling mediation effects in multilevel data have proliferated over the past decade, keeping pace with the demands of applied research. Approaches for testing mediation hypotheses with 2-level clustered data were first proposed using multilevel modeling (MLM) and subsequently using multilevel structural equation modeling (MSEM) to…
Enhancing data locality by using terminal propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, B.; Leland, R.; Van Driessche, R.
1995-12-31
Terminal propagation is a method developed in the circuit placement community for adding constraints to graph partitioning problems. This paper adapts and expands this idea, and applies it to the problem of partitioning data structures among the processors of a parallel computer. We show how the constraints in terminal propagation can be used to encourage partitions in which messages are communicated only between architecturally near processors. We then show how these constraints can be handled in two important partitioning algorithms, spectral bisection and multilevel-KL. We compare the quality of partitions generated by these algorithms to each other and to partitions generated by more familiar techniques.
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and environmental conservation, as it can help us to evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use, its adaptability in seeking optimal solutions rapidly, and the use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg⁻¹ and 66,112.94 MJ kg⁻¹, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting the energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared to the ANN model, despite the ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
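As an illustration only (not the authors' model, data, or training setup), a feed-forward network with the 12-6-8-1 topology described above could be sketched with scikit-learn; the placeholder inputs stand in for the 12 energy-input features.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical placeholder data: 12 energy-input features -> 1 energy output
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 12))
y = X @ rng.uniform(size=12) + 0.05 * rng.normal(size=200)

# Two hidden layers of 6 and 8 neurons give the 12-6-8-1 topology
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(6, 8), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
r = np.corrcoef(y, model.predict(X))[0, 1]  # training correlation coefficient R
```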
Formulation and Application of the Generalized Multilevel Facets Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chih-Yu
2007-01-01
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
Kim, Eun Sook; Cao, Chunhua
2015-01-01
Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., the correlation between random effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.
Development of a multilevel health and safety climate survey tool within a mining setting.
Parker, Anthony W; Tones, Megan J; Ritchie, Gabrielle E
2017-09-01
This study aimed to design, implement and evaluate the reliability and validity of a multifactorial and multilevel health and safety climate survey (HSCS) tool with utility in the Australian mining setting. An 84-item questionnaire was developed and pilot tested on a sample of 302 Australian miners across two open cut sites. A 67-item, 10 factor solution was obtained via exploratory factor analysis (EFA) representing prioritization and attitudes to health and safety across multiple domains and organizational levels. Each factor demonstrated a high level of internal reliability, and a series of ANOVAs determined a high level of consistency in responses across the workforce, and generally irrespective of age, experience or job category. Participants tended to hold favorable views of occupational health and safety (OH&S) climate at the management, supervisor, workgroup and individual level. The survey tool demonstrated reliability and validity for use within an open cut Australian mining setting and supports a multilevel, industry specific approach to OH&S climate. Findings suggested a need for mining companies to maintain high OH&S standards to minimize risks to employee health and safety. Future research is required to determine the ability of this measure to predict OH&S outcomes and its utility within other mine settings. As this tool integrates health and safety, it may have benefits for assessment, monitoring and evaluation in the industry, and improving the understanding of how health and safety climate interact at multiple levels to influence OH&S outcomes. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
Uthman, Olalekan A; Kayode, Gbenga A; Adekanmbi, Victor T
2013-12-01
Nigeria has the third-highest number of people living with HIV/AIDS in the world, after India and South Africa. HIV/AIDS places a considerable burden on society's resources, and its prevention is a cost-beneficial solution to address these consequences. To the best of our knowledge, there has been no multilevel study performed to date that examined the separate and independent associations of individual and community socioeconomic status (SES) with HIV prevention knowledge in Nigeria. Multilevel linear regression models were applied to the 2008 Nigeria Demographic and Health Survey on 48,871 respondents (Level 1) nested within 886 communities (Level 2) from 37 districts (Level 3). Approximately one-fifth (20%) of respondents were not aware of any element of the Abstinence, Being faithful and Condom use (ABC) approach to preventing the sexual transmission of HIV. However, the likelihood of being aware of the ABC approach to preventing the sexual transmission of HIV increased with older age, male gender, greater educational attainment, a higher wealth index, living in an urban area and being from the least socioeconomically disadvantaged communities. There were significant community and district variations in respondents' knowledge of the ABC approach to preventing the sexual transmission of HIV. The present study provides evidence that both individual- and community-level SES factors are important predictors of knowledge of the ABC approach to preventing the sexual transmission of HIV in Nigeria. The findings underscore the need to implement public health prevention strategies not only at the individual level, but also at the community level.
ERIC Educational Resources Information Center
Sun, Shuyan; Pan, Wei
2014-01-01
As applications of multilevel modelling in educational research increase, researchers realize that multilevel data collected in many educational settings are often not purely nested. The most common multilevel non-nested data structure is one that involves student mobility in longitudinal studies. This article provides a methodological review of…
ERIC Educational Resources Information Center
Lee, Woo-yeol; Cho, Sun-Joo
2017-01-01
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Alternative Methods for Assessing Mediation in Multilevel Data: The Advantages of Multilevel SEM
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zhang, Zhen; Zyphur, Michael J.
2011-01-01
Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's…
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures is briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
Construction of Covariance Functions with Variable Length Fields
NASA Technical Reports Server (NTRS)
Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven
2005-01-01
This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level powerlaw covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single and multi-level covariance functions. A multi-level powerlaw covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
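For reference, the single-level first- and third-order autoregressive correlation functions that the article generalizes are often written as follows in terms of separation r and length scale L (standard textbook forms in our notation; the exact expressions and normalizations used in the article may differ). Consistent with the abstract, the first form has zero derivatives and the second has three derivatives at zero separation.

```latex
% r: separation distance, L: length scale
C_{\mathrm{FOAR}}(r) = \exp\!\left(-\frac{r}{L}\right),
\qquad
C_{\mathrm{TOAR}}(r) = \left(1 + \frac{r}{L} + \frac{r^{2}}{3L^{2}}\right)\exp\!\left(-\frac{r}{L}\right).
```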
Hahn, Elizabeth A.; Lachman, Margie E.
2014-01-01
The present study examined the role of long-term working memory decline in the relationship between everyday experiences of memory problems and perceived control, and we also considered whether the use of accommodative strategies [selective optimization with compensation (SOC)] would be adaptive. The study included Boston-area participants (n=103) from the Midlife in the United States study (MIDUS) who completed two working memory assessments over ten years and weekly diaries following Time 2. In adjusted multi-level analyses, greater memory decline and lower general perceived control were associated with more everyday memory problems. Low perceived control reported in a weekly diary was associated with more everyday memory problems among those with greater memory decline and low SOC strategy use (Est.=−0.28, SE=0.13, p=.036). These results suggest that the use of SOC strategies in the context of declining memory may help to buffer the negative effects of low perceived control on everyday memory. PMID:24597768
Wills, Jeremiah B; Brauer, Jonathan R
2012-03-01
Drawing on previous theoretical and empirical work, we posit that maternal employment influences on child well-being vary across birth cohorts. We investigate this possibility by analyzing longitudinal data from a sample of children and their mothers drawn from the National Longitudinal Survey of Youth. We introduce a series of age, cohort, and maternal employment interaction terms into multilevel models predicting child well-being to assess whether any potential short-term or long-term effects of early and current maternal employment vary across birth cohorts. Results indicate that maternal employment largely is inconsequential to child well-being regardless of birth cohort, with a few exceptions. For instance, children born in earlier cohorts may have experienced long-term positive effects of having an employed mother; however, as maternal employment became more commonplace in recent cohorts, these beneficial effects appear to have disappeared. We discuss theoretical and methodological implications of these findings.
Effects of Normal Aging on Visuo-Motor Plasticity
NASA Technical Reports Server (NTRS)
Roller, Carrie A.; Cohen, Helen S.; Kimball, Kay T.; Bloomberg, Jacob J.
2001-01-01
Normal aging is associated with declines in neurologic function. Uncompensated visual and vestibular problems may have dire consequences including dangerous falls. Visuomotor plasticity is a form of behavioral neural plasticity which is important in the process of adapting to visual or vestibular alteration, including those changes due to pathology, pharmacotherapy, surgery or even entry into a microgravity or underwater environment. In order to determine the effects of aging on visuomotor plasticity, we chose the simple and easily measured paradigm of visual-motor re-arrangement created by using visual displacement prisms while throwing small balls at a target. Subjects threw balls before, during and after wearing a set of prisms which displace the visual scene by twenty degrees to the right. Data obtained during adaptation were modeled using multilevel analyses for 73 subjects aged 20 to 80 years. We found no statistically significant difference in measures of visuomotor plasticity with advancing age. Further studies are underway examining variable practice training as a potential mechanism for enhancing this form of behavioral neural plasticity.
Effects of normal aging on visuo-motor plasticity
NASA Technical Reports Server (NTRS)
Roller, Carrie A.; Cohen, Helen S.; Kimball, Kay T.; Bloomberg, Jacob J.
2002-01-01
Normal aging is associated with declines in neurologic function. Uncompensated visual and vestibular problems may have dire consequences including dangerous falls. Visuo-motor plasticity is a form of behavioral neural plasticity, which is important in the process of adapting to visual or vestibular alteration, including those changes due to pathology, pharmacotherapy, surgery or even entry into microgravity or an underwater environment. To determine the effects of aging on visuo-motor plasticity, we chose the simple and easily measured paradigm of visual-motor rearrangement created by using visual displacement prisms while throwing small balls at a target. Subjects threw balls before, during and after wearing a set of prisms which displace the visual scene by twenty degrees to the right. Data obtained during adaptation were modeled using multilevel modeling techniques for 73 subjects, aged 20 to 80 years. We found no statistically significant difference in measures of visuo-motor plasticity with advancing age. Further studies are underway examining variable practice training as a potential mechanism for enhancing this form of behavioral neural plasticity.
Sohng, Hee Yon; Kuniyuki, Alan; Edelson, Jane; Weir, Rosy Chang; Song, Hui; Tu, Shin-Ping
2013-01-01
Understanding and enhancing change capabilities, including Practice Adaptive Reserve (PAR), of Community Health Centers (CHCs) may mitigate cancer-related health disparities. Using stratified random sampling, we recruited 232 staff from seven CHCs serving Asian Pacific Islander communities to complete a self-administered survey. We performed multilevel regression analyses to examine PAR composite scores by CHC, position type, and number of years worked at their clinic. The mean PAR score was 0.7 (s.d. 0.14). Higher scores were associated with a greater perceived likelihood that clinic staff would participate in an evidence-based intervention (EBI). Constructs such as communication, clinic flow, sensemaking, change valence, and resource availability were positively associated with EBI implementation or trended toward significance. PAR scores are positively associated with perceived likelihood of clinic staff participation in cancer screening EBI. Future research is needed to determine PAR levels most conducive to implementing change and to developing interventions that enhance Adaptive Reserve.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
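As a rough illustration of the local time stepping idea (not the paper's Newmark-LTS bookkeeping), the sketch below assigns each element to a power-of-two time-step level derived from its local CFL-stable step; the CFL number, wave speed, and element sizes are assumed values.

```python
import numpy as np

# Sketch: group elements into local time-step levels so that the smallest
# elements advance with the minimum stable step while coarse elements take
# steps that are power-of-two multiples of it.  The factor-of-two grouping
# mirrors multilevel LTS schemes in general, not the exact Newmark-LTS
# update of the paper.

def lts_levels(element_sizes, wave_speed, cfl=0.5):
    dt_elem = cfl * np.asarray(element_sizes) / wave_speed   # local stable steps
    dt_min = dt_elem.min()
    # level 0 = smallest elements; level p may take steps of dt_min * 2**p
    levels = np.floor(np.log2(dt_elem / dt_min) + 1e-9).astype(int)
    return levels, dt_min

sizes = np.array([0.01, 0.01, 0.08, 0.16, 0.32])   # strong size contrast
levels, dt_min = lts_levels(sizes, wave_speed=1.0)
print(levels, dt_min)    # e.g. levels [0 0 3 4 5] with dt_min = 0.005
```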
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a-priori knowledge or assumptions on the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed to obtain a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as “the compatible weighted matching”. In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Hahn, Elizabeth A; Lachman, Margie E
2015-01-01
The present study examined the role of long-term working memory decline in the relationship between everyday experiences of memory problems and perceived control, and we also considered whether the use of accommodative strategies [selective optimization with compensation (SOC)] would be adaptive. The study included Boston-area participants (n = 103) from the Midlife in the United States study (MIDUS) who completed two working memory assessments over 10 years and weekly diaries following Time 2. In adjusted multi-level analyses, greater memory decline and lower general perceived control were associated with more everyday memory problems. Low perceived control reported in a weekly diary was associated with more everyday memory problems among those with greater memory decline and low SOC strategy use (Est. = -0.28, SE= 0.13, p = .036). These results suggest that the use of SOC strategies in the context of declining memory may help to buffer the negative effects of low perceived control on everyday memory.
A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2001-01-01
This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC based partitioning are presented as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
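A minimal sketch of the space-filling-curve idea mentioned above is given below: cells are ordered by a Morton (Z-order) key and the ordered list is cut into one chunk per rank. The bit width, the equal-count splitting rule, and the function names are illustrative assumptions rather than the solver's actual partitioner.

```python
# Sketch of on-the-fly partitioning with a space-filling curve: cells are
# ordered by their Morton (Z-order) key and the ordered list is cut into
# equal-count chunks, one per rank.  The bit width and splitting rule are
# illustrative assumptions.

def morton3d(ix, iy, iz, bits=10):
    """Interleave the bits of three integer cell coordinates."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, nranks):
    """cells: list of (ix, iy, iz) integer coordinates."""
    ordered = sorted(cells, key=lambda c: morton3d(*c))
    chunk = -(-len(ordered) // nranks)          # ceiling division
    return [ordered[i * chunk:(i + 1) * chunk] for i in range(nranks)]

cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
parts = partition(cells, nranks=4)
print([len(p) for p in parts])   # roughly equal subdomains along the SFC
```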
Lee, Rebekka M; Ramanadhan, Shoba; Kruse, Gina R; Deutsch, Charles
2018-01-01
Background: Strong partnerships are critical to integrate evidence-based prevention interventions within clinical and community-based settings, offering multilevel and sustainable solutions to complex health issues. As part of Massachusetts' 2012 health reform, The Prevention and Wellness Trust Fund (PWTF) funded nine local partnerships throughout the state to address hypertension, pediatric asthma, falls among older adults, and tobacco use. The initiative was designed to improve health outcomes through prevention and disease management strategies and reduce healthcare costs. Purpose: Describe the mixed-methods study design for investigating PWTF implementation. Methods: The Consolidated Framework for Implementation Research guided the development of this evaluation. First, the study team conducted semi-structured qualitative interviews with leaders from each of nine partnerships to document partnership development and function, intervention adaptation and delivery, and the influence of contextual factors on implementation. The interview findings were used to develop a quantitative survey to assess the implementation experiences of 172 staff from clinical and community-based settings and a social network analysis to assess changes in the relationships among 72 PWTF partner organizations. The quantitative survey data on ratings of perceived implementation success were used to purposively select 24 staff for interviews to explore the most successful experiences of implementing evidence-based interventions for each of the four conditions. Conclusions: This mixed-methods approach for evaluation of implementation of evidence-based prevention interventions by PWTF partnerships can help decision-makers set future priorities for implementing and assessing clinical-community partnerships focused on prevention.
Multilevel Sequential² Monte Carlo for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth
2018-09-01
The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
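The following sketch illustrates one ingredient of such adaptive tempering, assuming a standard effective-sample-size rule; it is not the paper's full adaptive choice between tempering and bridging. Given particle log-likelihoods, the next inverse temperature is found by bisection so that the effective sample size stays near a target fraction.

```python
import numpy as np

# Sketch of adaptive tempering: choose the next inverse temperature so that
# the effective sample size (ESS) of the reweighted particles stays at a
# target fraction.  The target value and bisection tolerance are
# illustrative assumptions.

def next_temperature(loglik, beta_old, target_frac=0.5):
    n = len(loglik)

    def ess(beta):
        w = np.exp((beta - beta_old) * (loglik - loglik.max()))
        w /= w.sum()
        return 1.0 / np.sum(w ** 2)

    lo, hi = beta_old, 1.0
    if ess(hi) >= target_frac * n:       # can jump straight to the posterior
        return 1.0
    for _ in range(50):                  # simple bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= target_frac * n else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
loglik = rng.normal(-50.0, 5.0, size=1000)
print(next_temperature(loglik, beta_old=0.1))
```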
Ultra-long fiber Raman lasers: design considerations
NASA Astrophysics Data System (ADS)
Koltchanov, I.; Kroushkov, D. I.; Richter, A.
2015-03-01
In the frame of the European Marie Curie project GRIFFON [http://astonishgriffon.net/] the usage of a green approach in terms of reduced power consumption and maintenance costs is envisioned for long-span fiber networks. This shall be accomplished by coherent transmission in unrepeatered links (100 km - 350 km) utilizing ultra-long Raman fiber laser (URFL)-based distributed amplification, multi-level modulation formats, and adapted Digital Signal Processing (DSP) algorithms. The URFL uses a cascaded second-order pumping scheme where two (co- and counter-) ˜ 1365 nm pumps illuminate the fiber. The URFL oscillates at ˜ 1450 nm whereas amplification is provided by stimulated Raman scattering (SRS) of the ˜ 1365 nm pumps and the optical feedback is realized by two Fiber Bragg gratings (FBGs) at the fiber ends reflecting at 1450 nm. The light field at 1450 nm provides amplification for signal waves in the 1550 nm range due to SRS. In this work we present URFL design studies intended to characterize and optimize the power and noise characteristics of the fiber links. We use a bidirectional fiber model describing propagation of the signal, pump and noise powers along the fiber length. From the numerical solution we evaluate the on/off Raman gain and its bandwidth, the signal excursion over the fiber length, OSNR spectra, and the accumulated nonlinearities. To achieve best performance for these characteristics the laser design is optimized with respect to the forward/backward pump powers and wavelengths, input/output signal powers, reflectivity profile of the FBGs and other parameters.
Resche-Rigon, Matthieu; White, Ian R
2018-06-01
In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
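As a loose illustration only, the sketch below performs a single chained-equations imputation pass for one incomplete variable in clustered data, including the cluster means of a covariate among the predictors, as the abstract notes joint multilevel models imply; the single-covariate setup, homoscedastic residual draw, and variable names are assumptions and do not reproduce the proposed two-stage method.

```python
import numpy as np

# Minimal sketch of one chained-equations pass: regress the observed values
# of an incomplete variable on a covariate and on its cluster means, then
# draw imputations from the fitted model with residual noise.  The paper's
# two-stage method additionally handles heteroscedasticity across clusters.

def impute_once(y, x, cluster, rng):
    y = y.astype(float).copy()
    xbar = np.array([x[cluster == c].mean() for c in cluster])   # cluster means
    X = np.column_stack([np.ones_like(x), x, xbar])
    obs = ~np.isnan(y)
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    sigma = np.std(y[obs] - X[obs] @ beta)
    miss = ~obs
    y[miss] = X[miss] @ beta + rng.normal(0.0, sigma, miss.sum())
    return y

rng = np.random.default_rng(1)
cluster = np.repeat(np.arange(5), 20)
x = rng.normal(cluster, 1.0)                 # cluster-dependent covariate
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)
y[rng.random(100) < 0.3] = np.nan            # sporadically missing values
print(np.isnan(impute_once(y, x, cluster, rng)).sum())   # 0 after imputation
```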
DC-DC Type High-Frequency Link DC for Improved Power Quality of Cascaded Multilevel Inverter
NASA Astrophysics Data System (ADS)
Sadikin, Muhammad; Senjyu, Tomonobu; Yona, Atsushi
2013-06-01
Multilevel inverters are emerging as a new breed of power converter options for power system applications. Recent advances in power switching devices enabled the suitability of multilevel inverters for high voltage and high power applications because they are connecting several devices in series without the need of component matching. Usually, a transformerless battery energy storage system, based on a cascaded multilevel inverter, is used as a measure for voltage and frequency deviations. The size, weight, and cost of the energy storage system can thereby be reduced. High-frequency link circuit topology is advantageous in realizing compact and light-weight power converters for uninterruptible power supply systems, new energy systems using photovoltaic-cells, fuel-cells and so on. This paper presents a DC-DC type high-frequency link DC (HFLDC) cascaded multilevel inverter. Each converter cell implements a control strategy for two H-bridge inverters that are controlled with the same multicarrier pulse width modulation (PWM) technique. The proposed cascaded multilevel inverter generates lower voltage total harmonic distortion (THD) in comparison with conventional cascaded multilevel inverter. Digital simulations are carried out using PSCAD/EMTDC to validate the performance of the proposed cascaded multilevel inverter.
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
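For context, a minimal sketch of the normalized LMS baseline mentioned in the comparison is shown below; the filter length, step size, and test signal are assumed values, and the TLMS update itself (which minimizes a total-least-squares cost) is not reproduced here.

```python
import numpy as np

# Sketch of the normalized LMS baseline the proposed TLMS algorithm is
# compared against: the weight vector is nudged along the input vector,
# scaled by the instantaneous error and input power.

def nlms(x, d, n_taps=4, mu=0.5, eps=1e-8):
    w = np.zeros(n_taps)
    y_hat = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]    # [x[n], x[n-1], ..., x[n-n_taps+1]]
        y_hat[n] = w @ u
        e = d[n] - y_hat[n]                  # instantaneous error
        w += mu * e * u / (u @ u + eps)      # normalized gradient step
    return w, y_hat

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
true_w = np.array([0.8, -0.3, 0.2, 0.1])
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, _ = nlms(x, d)
print(np.round(w, 2))                        # approaches true_w
```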
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of population, which significantly improves the computational efficiency even when it is combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
NASA Astrophysics Data System (ADS)
Aftosmis, Michael J.
1992-10-01
A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
Social stratification, classroom climate, and the behavioral adaptation of kindergarten children
Boyce, W. Thomas; Obradović, Jelena; Bush, Nicole R.; Stamperdahl, Juliet; Kim, Young Shin; Adler, Nancy
2012-01-01
Socioeconomic status (SES) is the single most potent determinant of health within human populations, from infancy through old age. Although the social stratification of health is nearly universal, there is persistent uncertainty regarding the dimensions of SES that effect such inequalities and thus little clarity about the principles of intervention by which inequalities might be abated. Guided by animal models of hierarchical organization and the health correlates of subordination, this prospective study examined the partitioning of children's adaptive behavioral development by their positions within kindergarten classroom hierarchies. A sample of 338 5-y-old children was recruited from 29 Berkeley, California public school classrooms. A naturalistic observational measure of social position, parent-reported family SES, and child-reported classroom climate were used in estimating multilevel, random-effects models of children's adaptive behavior at the end of the kindergarten year. Children occupying subordinate positions had significantly more maladaptive behavioral outcomes than their dominant peers. Further, interaction terms revealed that low family SES and female sex magnified, and teachers’ child-centered pedagogical practices diminished, the adverse influences of social subordination. Taken together, results suggest that, even within early childhood groups, social stratification is associated with a partitioning of adaptive behavioral outcomes and that the character of larger societal and school structures in which such groups are nested can moderate rank–behavior associations. PMID:23045637
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Serazio, C.; Chacon, L.; Lapenta, G.
2006-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerably difficult numerical treatment. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys. 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).
Onishi, Alex C; Ashraf, Mohammed; Soetikno, Brian T; Fawzi, Amani A
2018-04-10
To examine the relationship between ischemia and disorganization of the retinal inner layers (DRIL). Cross-sectional retrospective study of 20 patients (22 eyes) with diabetic retinopathy presenting to a tertiary academic referral center, who had DRIL on structural optical coherence tomography (OCT) using Spectralis HRA + OCT (Heidelberg Engineering, Heidelberg, Germany) and OCT angiography with XR Avanti (Optovue Inc, Fremont, CA) on the same day. Optical coherence tomography angiography images were further processed to remove flow signal projection artifacts using a software algorithm adapted from recent studies. Retinal capillary perfusion in the superficial capillary plexuses, middle capillary plexuses, and deep capillary plexuses, as well as integrity of the photoreceptor lines on OCT was compared in areas with DRIL to control areas without DRIL in the same eye. Qualitative assessment of projection-resolved OCT angiography of eyes with DRIL on structural OCT demonstrated significant perfusion deficits compared with adjacent control areas (P < 0.001). Most lesions (85.7%) showed superimposed superficial capillary plexus and/or middle capillary plexus nonperfusion in addition to deep capillary plexus nonflow. Areas of DRIL were significantly associated with photoreceptor disruption (P = 0.035) compared with adjacent DRIL-free areas. We found that DRIL is associated with multilevel retinal capillary nonperfusion, suggesting an important role for ischemia in this OCT phenotype.
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
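A minimal sketch of an error-controlled time-step update of the kind described above follows; the exponent, safety factor, and clamp values are illustrative assumptions rather than the paper's exact predictor-corrector controller.

```python
# Sketch of an error-controlled time-step update: the step is scaled by the
# ratio of the accuracy target to a predictor-corrector error estimate, with
# a safety factor and growth/shrink clamps.  Exponent and clamp values are
# illustrative assumptions.

def next_step(dt, err, tol, safety=0.9, grow_max=2.0, shrink_min=0.2):
    if err <= 0.0:
        return dt * grow_max
    factor = safety * (tol / err) ** 0.5
    return dt * min(grow_max, max(shrink_min, factor))

dt = 1.0e-3
for err in (5.0e-5, 2.0e-4, 8.0e-4):          # estimated per-step errors
    dt = next_step(dt, err, tol=1.0e-4)
    print(f"dt = {dt:.2e}")
```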
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code reuse difficult, and increases software complexity.
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method has been introduced for solving the multi-objective Space Manoeuvre Vehicles (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem has been transformed to a nonlinear programming (NLP) problem. According to the mission requirements, the solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established with consideration of the fuzzy factor of the system such that a proper compromise trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.
Dahl, Michael C; Ellingson, Arin M; Mehta, Hitesh P; Huelman, Justin H; Nuckley, David J
2013-02-01
Degenerative disc disease is commonly a multilevel pathology with varying deterioration severity. The use of fusion on multiple levels can significantly affect functionality and has been linked to persistent adjacent disc degeneration. A hybrid approach of fusion and nucleus replacement (NR) has been suggested as a solution for mildly degenerated yet painful levels adjacent to fusion. To compare the biomechanical metrics of different hybrid implant constructs, hypothesizing that an NR+fusion hybrid would be similar to a single-level fusion and perform more naturally compared with a two-level fusion. A cadaveric in vitro repeated-measures study was performed to evaluate a multilevel lumbar NR+fusion hybrid. Eight cadaveric spines (L3-S1) were tested in a Spine Kinetic Simulator (Instron, Norwood, MA, USA). Pure moments of 8 Nm were applied in flexion/extension, lateral bending, and axial rotation as well as compression loading. Specimens were tested intact; fused (using transforaminal lumbar interbody fusion instrumentation with posterior rods) at L5-S1; with a nuclectomy at L4-L5 including fusion at L5-S1; with NR at L4-L5 including fusion at L5-S1; and finally with a two-level fusion spanning L4-S1. Repeated-measures analysis of variance and corrected t tests were used to statistically compare outcomes. The NR+fusion hybrid and single-level fusion exhibited no statistical differences for range of motion (ROM), stiffness, neutral zone, and intradiscal pressure in all loading directions. Compared with two-level fusion, the hybrid affords the construct 41.9% more ROM on average. Two-level fusion stiffness was statistically higher than all other constructs and resulted in significantly lower ROM in flexion, extension, and lateral bending. The hybrid construct produced approximately half of the L3-L4 adjacent-level pressures as the two-level fusion case while generating similar pressures to the single-level fusion case. These data portend more natural functional outcomes and fewer adjacent disc complications for a multilevel NR+fusion hybrid compared with the classical two-level fusion.
Multilevel Interventions: Study Design and Analysis Issues
Gross, Cary P.; Zaslavsky, Alan M.; Taplin, Stephen H.
2012-01-01
Multilevel interventions, implemented at the individual, physician, clinic, health-care organization, and/or community level, increasingly are proposed and used in the belief that they will lead to more substantial and sustained changes in behaviors related to cancer prevention, detection, and treatment than would single-level interventions. It is important to understand how intervention components are related to patient outcomes and identify barriers to implementation. Designs that permit such assessments are uncommon, however. Thus, an important way of expanding our knowledge about multilevel interventions would be to assess the impact of interventions at different levels on patients as well as the independent and synergistic effects of influences from different levels. It also would be useful to assess the impact of interventions on outcomes at different levels. Multilevel interventions are much more expensive and complicated to implement and evaluate than are single-level interventions. Given how little evidence there is about the value of multilevel interventions, however, it is incumbent upon those arguing for this approach to do multilevel research that explicates the contributions that interventions at different levels make to the desired outcomes. Only then will we know whether multilevel interventions are better than more focused interventions and gain greater insights into the kinds of interventions that can be implemented effectively and efficiently to improve health and health care for individuals with cancer. This chapter reviews designs for assessing multilevel interventions and analytic ways of controlling for potentially confounding variables that can account for the complex structure of multilevel data. PMID:22623596
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure is presented which detects and resolves these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in reduction in the number of iterations by a factor of 5.
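For illustration, the sketch below shows the restrict-correct-prolong pattern of a V-cycle on a 1-D Poisson problem with weighted-Jacobi smoothing; it is a generic geometric multigrid example, whereas the paper builds its coarse levels from the parent cells of the adapted tetrahedral mesh.

```python
import numpy as np

# Generic 1-D V-cycle for -u'' = f with weighted-Jacobi smoothing, shown only
# to illustrate the restrict-correct-prolong pattern; the paper instead forms
# coarse levels from parent cells of the adapted mesh.

def vcycle(u, f, h, nu=3, omega=2.0 / 3.0):
    n = len(u)
    for _ in range(nu):                              # pre-smoothing
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    if n <= 3:
        return u
    r = np.zeros(n)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
    rc = r[::2].copy()                               # restrict (full weighting below)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros(len(rc)), rc, 2.0 * h, nu, omega)           # coarse solve
    u += np.interp(np.arange(n), np.arange(0, n, 2), ec)             # prolong and correct
    for _ in range(nu):                              # post-smoothing
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

n = 65                                               # 2**k + 1 grid points
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = vcycle(u, f, x[1] - x[0])
print(np.max(np.abs(u - np.sin(np.pi * x))))         # small discretization-level error
```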
Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper
Kuwabara, J.S.; Leland, H.V.
1986-01-01
Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 μM total Cu (10^-12 M free ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum than were growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase.
The development of video game enjoyment in a role playing game.
Wirth, Werner; Ryffel, Fabian; von Pape, Thilo; Karnowski, Veronika
2013-04-01
This study examines the development of video game enjoyment over time. The results of a longitudinal study (N=62) show that enjoyment increases over several sessions. Moreover, results of a multilevel regression model indicate a causal link between the dependent variable video game enjoyment and the predictor variables exploratory behavior, spatial presence, competence, suspense and solution, and simulated experiences of life. These findings are important for video game research because they reveal the antecedents of video game enjoyment in a real-world longitudinal setting. Results are discussed in terms of the dynamics of video game enjoyment under real-world conditions.
Time-Dependent Simulations of Turbopump Flows
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan; Chan, William; Williams, Robert
2002-01-01
Unsteady flow simulations for the RLV (Reusable Launch Vehicles) 2nd Generation baseline turbopump for one and a half impeller rotations have been completed by using a 34.3 million grid point model. MLP (Multi-Level Parallelism) shared memory parallelism has been implemented in INS3D, and benchmarked. Code optimization for cache-based platforms will be completed by the end of September 2001. Moving boundary capability is obtained by using the DCF module. Scripting capability from CAD (computer aided design) geometry to solution has been developed. Data compression is applied to reduce data size in post processing. Fluid/Structure coupling has been initiated.
Systematic, Multimethod Assessment of Adaptations Across Four Diverse Health Systems Interventions.
Rabin, Borsika A; McCreight, Marina; Battaglia, Catherine; Ayele, Roman; Burke, Robert E; Hess, Paul L; Frank, Joseph W; Glasgow, Russell E
2018-01-01
Many health outcomes and implementation science studies have demonstrated the importance of tailoring evidence-based care interventions to local context to improve fit. By adapting to local culture, history, resources, characteristics, and priorities, interventions are more likely to lead to improved outcomes. However, it is unclear how best to adapt evidence-based programs and promising innovations. There are few guides or examples of how to best categorize or assess health-care adaptations, and even fewer that are brief and practical for use by non-researchers. This study describes the importance and potential of assessing adaptations before, during, and after the implementation of health systems interventions. We present a promising multilevel and multimethod approach developed and being applied across four different health systems interventions. Finally, we discuss implications and opportunities for future research. The four case studies are diverse in the conditions addressed, interventions, and implementation strategies. They include two nurse coordinator-based transition of care interventions, a data and training-driven multimodal pain management project, and a cardiovascular patient-reported outcomes project, all of which are using audit and feedback. We used the same modified adaptation framework to document changes made to the interventions and implementation strategies. To create the modified framework, we started with the adaptation and modification model developed by Stirman and colleagues and expanded it by adding concepts from the RE-AIM framework. Our assessments address the intuitive domains of Who, How, When, What, and Why to classify and organize adaptations. For each case study, we discuss how the modified framework was operationalized, the multiple methods used to collect data, results to date and approaches utilized for data analysis. These methods include a real-time tracking system and structured interviews at key times during the intervention. We provide descriptive data on the types and categories of adaptations made and discuss lessons learned. The multimethod approaches demonstrate utility across diverse health systems interventions. The modified adaptations model adequately captures adaptations across the various projects and content areas. We recommend systematic documentation of adaptations in future clinical and public health research and have made our assessment materials publicly available.
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity can not only resolve the steady state solution, but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.
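A sketch of an energy-rate-driven step selector of the kind described above is given below; the specific formula dt = max(dt_min, dt_max / sqrt(1 + alpha |dE/dt|^2)) and its constants are assumptions for illustration and may differ from the paper's exact rule.

```python
import numpy as np

# Sketch: the step shrinks when the free energy is changing quickly and
# relaxes toward a maximum step near steady state.  Constants are assumed.

def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt ** 2))

for dE_dt in (1.0, 1e-1, 1e-3, 1e-6):       # fast dynamics -> near steady state
    print(f"|dE/dt| = {dE_dt:8.1e}  ->  dt = {adaptive_dt(dE_dt):.4f}")
```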
Grid adaption for hypersonic flow
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.; Smith, Robert E.
1987-01-01
The methods of grid adaption are reviewed and a method is developed with the capability of adaption to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also the method has been formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose body. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
TRACING THE DEVELOPMENT OF TYPEWRITING SKILLS IN AN ADAPTIVE E-LEARNING ENVIRONMENT.
van den Bergh, Mattis; Hofman, Abe D; Schmittmann, Verena D; van der Maas, Han L J
2015-12-01
Typewriting studies which compare novice and expert typists have suggested that highly trained typing skills involve cognitive processes with an inner and an outer loop, which regulate keystrokes and words, respectively. The present study investigates these loops longitudinally, using multi-level modeling of 1,091,707 keystroke latencies from 62 children (M age=12.6 yr.) following an online typing course. Using finger movement repetition as indicator of the inner loop and words typed as indicator of the outer loop, practicing keystroke latencies resulted in different developmental curves for each loop. Moreover, based on plateaus in the developmental curves, the inner loop seemed to require less practice to develop than the outer loop.
How We Design Feasibility Studies
Bowen, Deborah J.; Kreuter, Matthew; Spring, Bonnie; Cofta-Woerpel, Ludmila; Linnan, Laura; Weiner, Diane; Bakken, Suzanne; Kaplan, Cecilia Patrick; Squiers, Linda; Fabrizio, Cecilia; Fernandez, Maria
2010-01-01
Public health is moving toward the goal of implementing evidence-based interventions. To accomplish this, there is a need to select, adapt, and evaluate intervention studies. Such selection relies, in part, on making judgments about the feasibility of possible interventions and determining whether comprehensive and multilevel evaluations are justified. There exist few published standards and guides to aid these judgments. This article describes the diverse types of feasibility studies conducted in the field of cancer prevention, using a group of recently funded grants from the National Cancer Institute. The grants were submitted in response to a request for applications proposing research to identify feasible interventions for increasing the utilization of the Cancer Information Service among underserved populations. PMID:19362699
Volling, Brenda L
2005-12-01
The birth of a baby sibling is a normative life event for many children. Few studies address this important transition period and changes in the older sibling's adjustment and family relationships following the sibling's birth. The present article presents a developmental ecological systems model for studying changes in family life and the older child's adjustment following the birth of a baby sibling. Simultaneous changes occurring in the family and how these changes are interrelated over time to predict patterns of adaptation after the transition to siblinghood are underscored. Recommendations for designing longitudinal studies that take advantage of recent developments in multilevel modeling are also discussed.
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is acceptable since it makes possible an overall and local error reduction through grid redistribution. SAGE includes the ability to modify the adaption techniques in boundary regions, which substantially improves the flexibility of the adaptive scheme. The vectorial approach used in the analysis also provides flexibility. The user has complete choice of adaption direction and order of sequential adaptions without concern for the computational data structure. Multiple passes are available with no restraint on stepping directions; for each adaptive pass the user can choose a completely new set of adaptive parameters. This facility, combined with the capability of edge boundary control, enables the code to individually adapt multi-dimensional multiple grids. Zonal grids can be adapted while maintaining continuity along the common boundaries. For patched grids, the multiple-pass capability enables complete adaption. SAGE is written in FORTRAN 77 and is intended to be machine independent; however, it requires a FORTRAN compiler which supports NAMELIST input. It has been successfully implemented on Sun series computers, SGI IRIS's, DEC MicroVAX computers, HP series computers, the Cray YMP, and IBM PC compatibles. Source code is provided, but no sample input and output files are provided. The code reads three datafiles: one that contains the initial grid coordinates (x,y,z), one that contains corresponding flow-field variables, and one that contains the user control parameters. 
It is assumed that the first two datasets are formatted as defined in the plotting software package PLOT3D. Several machine versions of PLOT3D are available from COSMIC. The amount of main memory is dependent on the size of the matrix. The standard distribution medium for SAGE is a 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. SAGE was developed in 1989, first released as a 2D version in 1991 and updated to 3D in 1993.
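The one-dimensional redistribution step described in the SAGE abstract above reduces, in its simplest form, to equidistributing points along a coordinate line under tension-spring weights. The following Python sketch illustrates that idea under assumed weights and fixed end points; it is not the SAGE FORTRAN implementation, and the weight definition and the mock gradient are placeholders.

import numpy as np

def redistribute_line(s, w):
    """Equidistribute grid points along one coordinate line (minimal sketch of
    the tension-spring idea, not the SAGE implementation).
    s : current arc-length positions of the points (end points held fixed)
    w : positive "spring constants" at the midpoints, len(w) == len(s) - 1,
        e.g. 1 + a scaled flow-gradient measure.
    Equilibrium of the spring chain, w[i]*(x[i+1]-x[i]) = w[i-1]*(x[i]-x[i-1]),
    gives a tridiagonal system for the interior points, solved here with the
    Thomas algorithm."""
    n = len(s)
    a, b, c, d = np.zeros(n), np.ones(n), np.zeros(n), np.asarray(s, float).copy()
    for i in range(1, n - 1):
        a[i], b[i], c[i], d[i] = w[i - 1], -(w[i - 1] + w[i]), w[i], 0.0
    for i in range(1, n):                      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)                            # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

s = np.linspace(0.0, 1.0, 11)
w = 1.0 + 5.0 * np.exp(-((0.5 * (s[:-1] + s[1:]) - 0.7) / 0.05) ** 2)  # mock "shock" at 0.7
print(redistribute_line(s, w))   # points cluster near the high-gradient region

In a full adaptation pass such a solve would be repeated line by line in each coordinate direction, with torsion terms coupling neighbouring lines.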
Kizub, D; Ghali, I; Sabouni, R; Bourkadi, J E; Bennani, K; El Aouad, R; Dooley, K E
2012-09-01
In Morocco, tuberculosis (TB) treatment default is increasing in some urban areas. The aim of this study was to provide a detailed description of factors that contribute to patient default, and of solutions, from the point of view of health care professionals who participate in TB care. In-depth interviews were conducted with 62 physicians and nurses at nine regional public pulmonary clinics and local health clinics. Participants had a median of 24 years of experience in health care. Treatment default was seen as a result of multilevel factors related to the patient (lack of means, being a migrant worker, distance to treatment site, poor understanding of treatment, drug use, mental illness), the medical team (high patient load, low motivation, lack of resources for tracking defaulters), treatment organization (poor communication between treatment sites, no systematic strategy for patient education or tracking, incomplete record keeping), and the health care system and society. Tailored recommendations for low- and higher-cost interventions are provided. Interventions to enhance TB treatment completion should take into account the local context and the multilevel factors that contribute to default. Qualitative studies involving health care workers directly involved in TB care can be powerful tools to identify contributing factors and define strategies to help reduce treatment default.
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate incertitude through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
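The multilevel MC strategy referred to above rests on a telescoping sum over discretization levels, spending most samples on cheap coarse levels and only a few on expensive fine ones. The Python sketch below is a generic illustration; the sampler, the level hierarchy and the sample counts are assumptions and not the PSAAP II solar-receiver setup. The key requirement is that the fine and coarse evaluations on each level share the same random inputs, so that the variance of the correction terms decays with level.

import numpy as np

def mlmc_estimate(sampler, L, N):
    """Multilevel Monte Carlo estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].  sampler(l, n) must return n
    paired samples (fine, coarse) computed from the same random inputs;
    the coarse part is ignored on level 0."""
    est = 0.0
    for l in range(L + 1):
        fine, coarse = sampler(l, N[l])
        est += np.mean(fine) if l == 0 else np.mean(fine - coarse)
    return est

rng = np.random.default_rng(0)

def toy_sampler(l, n):
    # hypothetical "level-l approximation" of X**2 whose bias halves per level
    x = rng.standard_normal(n)
    fine = x ** 2 + 2.0 ** (-l)
    coarse = (x ** 2 + 2.0 ** (-(l - 1))) if l > 0 else np.zeros(n)
    return fine, coarse

print(mlmc_estimate(toy_sampler, L=5, N=[4000, 2000, 1000, 500, 250, 125]))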
Rectangular QPSK for generation of optical eight-ary phase-shift keying.
Lu, Guo-Wei; Sakamoto, Takahide; Kawanishi, Tetsuya
2011-09-12
Quadrature phase-shift keying (QPSK) is usually generated using an in-phase/quadrature (IQ) modulator in a balanced driving condition, showing a square-shape constellation in the complex plane. This conventional QPSK is referred to as square QPSK (S-QPSK) in this paper. On the other hand, when an IQ modulator is driven in an unbalanced manner with different amplitudes in the in-phase (I) and quadrature (Q) branches, a rectangular QPSK (R-QPSK) can be synthesized. The concept of R-QPSK is proposed for the first time and applied to an optical eight-ary phase-shift keying (8PSK) transmitter. By cascading an S-QPSK and an R-QPSK, an optical 8PSK can be synthesized. The transmitter configuration is based on two cascaded IQ modulators, which could also be used to generate other advanced multi-level formats such as quadrature amplitude modulation (QAM) when different driving and bias conditions are applied. Therefore, the proposed transmitter structure has the potential to be deployed as a versatile transmitter for the synthesis of several different multi-level modulation formats for future dynamic optical networks. A 30-Gb/s optical 8PSK is experimentally demonstrated using the proposed solution.
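Because the optical field transfer functions of cascaded modulators multiply, the output phase is the sum of the phases set by the two stages. The short Python sketch below checks this numerically under an assumed R-QPSK driving ratio (arctan(Q/I) = 22.5 degrees), an illustrative choice rather than the paper's reported operating point; it confirms that such a cascade produces eight equally spaced phase states.

import numpy as np

s_qpsk = np.exp(1j * np.deg2rad(45.0 + 90.0 * np.arange(4)))     # balanced stage
ratio = np.tan(np.deg2rad(22.5))                                  # assumed Q/I amplitude ratio
r_qpsk = np.array([1 + 1j * ratio, -1 + 1j * ratio,
                   -1 - 1j * ratio, 1 - 1j * ratio])
r_qpsk /= np.abs(r_qpsk)                   # rectangular constellation, constant amplitude
cascade = np.outer(s_qpsk, r_qpsk).ravel() # field products of the two stages
phases = np.unique(np.round(np.rad2deg(np.angle(cascade)), 3))
print(phases)                              # eight phases spaced by 45 degrees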
A multilevel Lab on chip platform for DNA analysis.
Marasso, Simone Luigi; Giuri, Eros; Canavese, Giancarlo; Castagna, Riccardo; Quaglio, Marzia; Ferrante, Ivan; Perrone, Denis; Cocuzza, Matteo
2011-02-01
Lab-on-chips (LOCs) are critical systems that have been introduced to speed up and reduce the cost of traditional, laborious and extensive analyses in biological and biomedical fields. These ambitious and challenging issues ask for multi-disciplinary competences that range from engineering to biology. Starting from the aim to integrate microarray technology and microfluidic devices, a complex multilevel analysis platform has been designed, fabricated and tested (All rights reserved-IT Patent number TO2009A000915). This LOC successfully manages to interface microfluidic channels with standard DNA microarray glass slides, in order to implement a complete biological protocol. Typical Micro Electro Mechanical Systems (MEMS) materials and process technologies were employed. A silicon/glass microfluidic chip and a Polydimethylsiloxane (PDMS) reaction chamber were fabricated and interfaced with a standard microarray glass slide. In order to have a highly disposable system, all micro-elements were passive and an external apparatus provided fluidic driving and thermal control. The major microfluidic and handling problems were investigated and innovative solutions were found. Finally, an entirely automated DNA hybridization protocol was successfully tested with a significant reduction in analysis time and reagent consumption with respect to a conventional protocol.
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
Yang, S; Wang, D
2000-01-01
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.
Variable-speed wind power system with improved energy capture via multilevel conversion
Erickson, Robert W.; Al-Naseem, Osama A.; Fingersh, Lee Jay
2005-05-31
A system and method for efficiently capturing electrical energy from a variable-speed generator are disclosed. The system includes a matrix converter using full-bridge, multilevel switch cells, in which semiconductor devices are clamped to a known constant DC voltage of a capacitor. The multilevel matrix converter is capable of generating multilevel voltage waveforms of arbitrary magnitude and frequency. The matrix converter can be controlled by using space vector modulation.
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Everton, Eric L.
1990-01-01
An interactive grid adaption method is developed, discussed and applied to the unsteady flow about an oscillating airfoil. The user is allowed to have direct interaction with the adaption of the grid as well as the solution procedure. Grid points are allowed to adapt simultaneously to several variables. In addition to the theory and results, the hardware and software requirements are discussed.
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
Tanaka, Gouhei; Aihara, Kazuyuki
2009-09-01
A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more outstanding when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256 gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
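The two activation choices contrasted above can be sketched compactly: the multilevel step function quantises the phase of a neuron's net input into one of K states, and a multilevel sigmoid smooths the transition between adjacent states. The Python sketch below uses assumed conventions (sector centres as output states, a logistic ramp of slope beta); it is illustrative and not the paper's exact definitions.

import numpy as np

def multilevel_step(z, K):
    # quantise the phase of the net input into one of K sectors (sector centres
    # are used as the output states here; conventions vary)
    k = np.floor(K * (np.angle(z) % (2 * np.pi)) / (2 * np.pi))
    return np.exp(1j * 2 * np.pi * (k + 0.5) / K)

def multilevel_sigmoid(z, K, beta=10.0):
    # smooth the transition between adjacent states with a logistic ramp
    phi = K * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)
    k = np.floor(phi)
    frac = 1.0 / (1.0 + np.exp(-beta * (phi - k - 0.5)))
    return np.exp(1j * 2 * np.pi * (k + frac) / K)

z = np.exp(1j * 0.9)      # an arbitrary unit-amplitude net input
print(multilevel_step(z, 8), multilevel_sigmoid(z, 8))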
Rodriguez-Ward, Dawn; Larson, Anne M; Ruesta, Harold Gordillo
2018-01-03
This study examines the role multilevel governance plays in the adoption of sustainable landscape management initiatives in emerging arrangements aimed at reducing emissions from deforestation and forest degradation (REDD+). It sheds light on the challenges these multiple layers of actors and interests encounter around such alternatives in a subnational jurisdiction. Through transcript analysis of 93 interviews with institutional actors in the region of Madre de Dios, Peru, particularly with regard to five sites of land-use change, we identified the multiple actors who are included and excluded in the decision-making process and uncovered their complex interactions in forest and landscape governance and REDD+ arrangements. Madre de Dios is a useful case for studying complex land-use dynamics, as it is home to multiple natural resources, a large mix of actors and interests, and a regional government that has recently experienced the reverberations of decentralization. Findings indicate that multiple actors shaped REDD+ to some extent, but REDD+ and its advocates were unable to shape land-use dynamics or landscape governance, at least in the short term. In the absence of strong and effective regional regulation for sustainable land use alternatives and the high value of gold on the international market, illegal gold mining proved to be a more profitable land-use choice. Although REDD+ created a new space for multilevel actor interaction and communication and new alliances to emerge, the study questions the prevailing REDD+ discourse suggesting that better coordination and cooperation will lead to integrated landscape solutions. For REDD+ to be able to play a role in integrated landscape governance, greater attention needs to be paid to grassroots actors, power and authority over territory and underlying interests and incentives for land-use change.
Algebraic dynamic multilevel method for compositional flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Cusini, Matteo; Fryer, Barnaby; van Kruijsdijk, Cor; Hajibeygi, Hadi
2018-02-01
This paper presents the algebraic dynamic multilevel method (ADM) for compositional flow in three-dimensional heterogeneous porous media in the presence of capillary and gravitational effects. As a significant advancement compared to the ADM for immiscible flows (Cusini et al., 2016) [33], here, mass conservation equations are solved along with k-value based thermodynamic equilibrium equations using a fully-implicit (FIM) coupling strategy. Two different fine-scale compositional formulations are considered: (1) the natural variables and (2) the overall-compositions formulation. At each Newton iteration the fine-scale FIM Jacobian system is mapped to a dynamically defined (in space and time) multilevel nested grid. The appropriate grid resolution is chosen based on the contrast of user-defined fluid properties and on the presence of specific features (e.g., well source terms). Consistent mapping between different resolutions is performed by means of sequences of restriction and prolongation operators. While finite-volume restriction operators are employed to ensure mass conservation at all resolutions, various prolongation operators are considered. In particular, different interpolation strategies can be used for the different primary variables, and multiscale basis functions are chosen as pressure interpolators so that fine-scale heterogeneities are accurately accounted for across different resolutions. Several numerical experiments are conducted to analyse the accuracy, efficiency and robustness of the method for both 2D and 3D domains. Results show that ADM provides accurate solutions by employing only a fraction of the number of grid cells employed in fine-scale simulations. As such, it presents a promising approach for large-scale simulations of multiphase flow in heterogeneous reservoirs with complex non-linear fluid physics.
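The restriction/prolongation machinery can be illustrated in one dimension. The sketch below uses an assumed piecewise-constant prolongation for simplicity (the ADM paper employs multiscale basis functions as the pressure interpolator); the restriction operator sums fine cells into their parent coarse cell, which is what makes the mapping mass conservative.

import numpy as np

def operators_1d(n_fine, cr):
    """Restriction R (sum of child cells -> mass conservative) and a
    piecewise-constant prolongation P for a 1-D grid with coarsening ratio cr."""
    n_coarse = n_fine // cr
    R = np.zeros((n_coarse, n_fine))
    for c in range(n_coarse):
        R[c, c * cr:(c + 1) * cr] = 1.0
    P = (R / cr).T
    return R, P

R, P = operators_1d(8, 2)
fine_mass = np.arange(8.0)
print(R @ fine_mass)            # coarse-cell totals (conserved)
print(P @ (R @ fine_mass))      # mapped back onto the fine grid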
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
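The final step, extracting multiple acetowhite levels with multiple thresholds, amounts to a banded thresholding of the feature image inside the detected region mask. The Python fragment below is illustrative only; the feature image, the mask produced by the pixel-grouping and region-growing stages, and the threshold values are all assumptions.

import numpy as np

def multilevel_labels(feature, mask, thresholds):
    """feature: 2-D float array; mask: boolean acetowhite-region mask;
    thresholds: increasing cut points separating the acetowhite levels.
    Returns 0 for background and 1..len(thresholds)+1 inside the mask."""
    levels = np.zeros(feature.shape, dtype=int)
    levels[mask] = 1 + np.searchsorted(np.asarray(thresholds),
                                       feature[mask], side="right")
    return levels

rng = np.random.default_rng(1)
feat = rng.random((4, 4))         # stand-in for the extracted feature image
mask = feat > 0.3                 # stand-in for the detected acetowhite region
print(multilevel_labels(feat, mask, thresholds=[0.5, 0.8]))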
Kukafka, Rita; Johnson, Stephen B; Linfante, Allison; Allegrante, John P
2003-06-01
Many interventions to improve the success of information technology (IT) implementations are grounded in behavioral science, using theories and models to identify conditions and determinants of successful use. However, each model in the IT literature has evolved to address specific theoretical problems of particular disciplinary concerns, and each model has been tested and has evolved using, in most cases, a more or less restricted set of IT implementation procedures. Functionally, this limits the perspective for taking into account the multiple factors at the individual, group, and organizational levels that influence use behavior. While a rich body of literature has emerged, employing prominent models such as the Technology Adoption Model, Social-Cognitive Theory, and Diffusion of Innovation Theory, the complexity of defining a suitable multi-level intervention has largely been overlooked. A gap exists between the implementation of IT and the integration of theories and models that can be utilized to develop multi-level approaches to identify factors that impede usage behavior. We present a novel framework that is intended to guide synthesis of more than one theoretical perspective for the purpose of planning multi-level interventions to enhance IT use. This integrative framework is adapted from PRECEDE/PROCEED, a conceptual framework used by health planners in hundreds of published studies to direct interventions that account for the multiple determinants of behavior. Since we claim that the literature on IT use behavior does not now include a multi-level approach, we undertook a systematic literature analysis to confirm this assertion. Our framework facilitated organizing this literature synthesis and our analysis was aimed at determining if the IT implementation approaches in the published literature were characterized by an approach that considered at least two levels of IT usage determinants. We found that while 61% of studies mentioned or referred to theory, none considered two or more levels. In other words, although the researchers employ behavioral theory, they omit two fundamental propositions: (1) IT usage is influenced by multiple factors and (2) interventions must be multi-dimensional. Our literature synthesis may provide additional insight into the reason for high failure rates associated with underutilized systems, and underscores the need to move beyond the current dominant approach that employs a single model to guide IT implementation plans that aim to address factors associated with IT acceptance and subsequent positive use behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates: the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
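In the usual abstract setting, with assumed notation (b(·,·) the bilinear form on U×V, l the load functional, (·,·)_V the test-space inner product, and B : U → V' the operator induced by b), the optimal test function attached to a trial direction and the equivalent minimum-residual statement can be sketched as:

\begin{align*}
  (t_{\delta u}, v)_V &= b(\delta u, v) \quad \forall v \in V
      && \text{(optimal test function realising the supremum)} \\
  u_h &= \arg\min_{w_h \in U_h} \tfrac{1}{2}\,\lVert B\,w_h - l \rVert_{V'}^{2}
      && \text{(equivalent minimum-residual statement)}
\end{align*}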
Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.
2013-01-01
Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solutions, but requires only 1 percent of the computational time.
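A heuristic remapping of this kind can be sketched as a greedy assignment on a gain matrix. The Python fragment below is an assumed, simplified stand-in for the paper's algorithm (square problem, one partition per processor): it repeatedly commits the (processor, partition) pair that keeps the most data in place, so little data has to be redistributed.

import numpy as np

def greedy_remap(S):
    """S[p, q] = amount of data of new partition q already resident on
    processor p.  Greedily pick the largest remaining gains so that each
    partition goes to a distinct processor and little data has to move."""
    n = S.shape[0]
    assign = -np.ones(n, dtype=int)
    free_proc, free_part = set(range(n)), set(range(n))
    for idx in np.argsort(S, axis=None)[::-1]:
        p, q = divmod(int(idx), S.shape[1])
        if p in free_proc and q in free_part:
            assign[q] = p
            free_proc.discard(p)
            free_part.discard(q)
    return assign        # assign[q] = processor that receives partition q

S = np.array([[9, 1, 0], [2, 7, 3], [0, 4, 8]])
print(greedy_remap(S))   # -> [0 1 2], keeping the most data in place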
NASA Astrophysics Data System (ADS)
Garner, G. G.; Keller, K.
2017-12-01
Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation, the value of information provided by observations, and point to wider-ranging application in climate change adaptation decision problems.
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Adaptive Osher-type scheme for the Euler equations with highly nonlinear equations of state
NASA Astrophysics Data System (ADS)
Lee, Bok Jik; Toro, Eleuterio F.; Castro, Cristóbal E.; Nikiforakis, Nikolaos
2013-08-01
For the numerical simulation of detonation of condensed phase explosives, complex equations of state (EOS), such as the Jones-Wilkins-Lee (JWL) EOS or the Cochran-Chan (C-C) EOS, are widely used. However, when a conservative scheme is used for solving the Euler equations with such equations of state, a spurious solution across the contact discontinuity, a well known phenomenon in multi-fluid systems, arises even for single materials. In this work, we develop a generalised Osher-type scheme in an adaptive primitive-conservative framework to overcome the aforementioned difficulties. Resulting numerical solutions are compared with the exact solutions and with the numerical solutions from the Godunov method in conjunction with the exact Riemann solver for the Euler equations with Mie-Grüneisen forms of equations of state, such as the JWL and the C-C equations of state. The adaptive scheme is extended to second order and its empirical convergence rates are presented, verifying second order accuracy for smooth solutions. Through a suite of several test problems in one and two space dimensions we illustrate the failure of conservative schemes and the capability of the methods of this paper to overcome the difficulties.
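As a concrete example of the Mie-Grüneisen-type closures involved, the JWL pressure can be evaluated as in the sketch below (standard textbook form; the constants A, B, R1, R2 and omega are material parameters, and the values shown are uncalibrated placeholders for illustration only).

import numpy as np

def jwl_pressure(rho, e, rho0, A, B, R1, R2, omega):
    """JWL pressure: p = A(1 - omega/(R1 V)) exp(-R1 V)
                       + B(1 - omega/(R2 V)) exp(-R2 V) + omega*rho*e,
    with relative volume V = rho0/rho and specific internal energy e."""
    V = rho0 / rho
    return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * rho * e)

# placeholder (uncalibrated) parameters, for illustration only
print(jwl_pressure(rho=1000.0, e=4.0e6, rho0=1630.0,
                   A=3.7e11, B=3.2e9, R1=4.15, R2=0.95, omega=0.3))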
2014-01-01
Background This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and apply it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression are used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression for addressing spatial dependency in datasets. Methods In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby, generates more robust outputs. Results The findings show that sex, employment status, monthly household income, and perceived levels of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. The spatially filtered multilevel model provides unbiased estimations and improves the explanatory power of the model compared to conventional multilevel models although there are no changes in the signs of parameters and the significance levels between the two models in this case study. Conclusions The integrated approach proposed in this paper is a useful tool for understanding the geographical distribution of self-rated health status within a multilevel framework. In future research, it would be useful to apply the spatially filtered multilevel model to other datasets in order to clarify the differences between the two models. It is anticipated that this integrated method will also out-perform conventional models when it is used in other contexts. PMID:24571639
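The eigenvector spatial filtering ingredient can be sketched in a few lines: candidate spatial filters are eigenvectors of the doubly centred spatial weights matrix, and a selected subset enters the neighbourhood level of the model as extra covariates. The Python sketch below assumes a toy binary contiguity matrix and a simple largest-eigenvalue selection rule; the study's actual weights, selection criterion and multilevel random effects are not reproduced.

import numpy as np

def moran_eigenvectors(W, k):
    """Return k candidate spatial filters: eigenvectors of the doubly centred
    (symmetrised) spatial weights matrix M W M with the largest eigenvalues,
    i.e. the patterns with the strongest positive spatial autocorrelation."""
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(M @ ((W + W.T) / 2.0) @ M)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]

W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # a toy contiguity matrix
E = moran_eigenvectors(W, k=2)              # columns to add as neighbourhood covariates
print(E)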
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Closed-Cycle Nutrient Supply For Hydroponics
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.
1991-01-01
Hydroponic system controls composition and feed rate of nutrient solution and recovers and recycles excess solution. Uses air pressure on bladders to transfer aqueous nutrient solution. Measures and adjusts composition of solution before it goes to hydroponic chamber. Eventually returns excess solution to one of tanks. Designed to operate in microgravity, also adaptable to hydroponic plant-growing systems on Earth.
Simulator for multilevel optimization research
NASA Technical Reports Server (NTRS)
Padula, S. L.; Young, K. C.
1986-01-01
A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.
Adaptive neuro-heuristic hybrid model for fruit peel defects detection.
Woźniak, Marcin; Połap, Dawid
2018-02-01
Fusion of machine learning methods benefits decision support systems. A composition of approaches makes it possible to combine the most efficient features into one solution. In this article we present an approach to the development of an adaptive method based on the fusion of a novel neural architecture and heuristic search into one co-working solution. We propose a neural network architecture that adapts to the processed input and co-works with a heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This makes processing easier, since in the smaller images (decomposed segments) the developed Adaptive Artificial Neural Network (AANN) processes less information, which makes the numerical calculations more precise. For each segment a descriptor vector is composed and presented to the proposed AANN architecture. Evaluation is run adaptively, where the developed AANN adapts its composed architecture to the inputs and their features. After evaluation, selected segments are forwarded to the heuristic search, which detects areas of interest. As a result the system returns the image with pixels marked over peel damage. Experimental results for the developed solution are discussed and compared with other commonly used methods to validate the efficacy of the proposed fusion and the impact of its use in the system structure and training process on the classification results.
Carayon, Pascale; Hancock, Peter; Leveson, Nancy; Noy, Ian; Sznelwar, Laerte; van Hootegem, Geert
2015-01-01
Traditional efforts to deal with the enormous problem of workplace safety have proved insufficient, as they have tended to neglect the broader sociotechnical environment that surrounds workers. Here, we advocate a sociotechnical systems approach that describes the complex multi-level system factors that contribute to workplace safety. From the literature on sociotechnical systems, complex systems and safety, we develop a sociotechnical model of workplace safety with concentric layers of the work system, socio-organisational context and the external environment. The future challenges that are identified through the model are highlighted. Practitioner Summary: Understanding the environmental, organisational and work system factors that contribute to workplace safety will help to develop more effective and integrated solutions to deal with persistent workplace safety problems. Solutions to improve workplace safety need to recognise the broad sociotechnical system and the respective interactions between the system elements and levels. PMID:25831959
Cho, Yeoungjee; Badve, Sunil V.; Hawley, Carmel M.; McDonald, Stephen P.; Brown, Fiona G.; Boudville, Neil; Bannister, Kym M.; Clayton, Philip A.
2013-01-01
Summary Background and objectives The effect of biocompatible peritoneal dialysis (PD) solutions on PD-related peritonitis is unclear. This study sought to evaluate the relationship between use of biocompatible solutions and the probability of occurrence or clinical outcomes of peritonitis. Design, setting, participants, & measurements The study included all incident Australian patients receiving PD between January 1, 2007, and December 31, 2010, using Australia and New Zealand Dialysis and Transplant Registry data. All multicompartment PD solutions of neutral pH were categorized as biocompatible solutions. The independent predictors of peritonitis and the use of biocompatible solutions were determined by multivariable, multilevel mixed-effects Poisson and logistic regression analysis, respectively. Sensitivity analyses, including propensity score matching, were performed. Results Use of biocompatible solutions gradually declined (from 7.5% in 2007 to 4.2% in 2010), with preferential use among smaller units and among younger patients without diabetes mellitus. Treatment with biocompatible solution was associated with significantly greater overall rate of peritonitis (0.67 versus 0.47 episode per patient-year; incidence rate ratio, 1.49; 95% confidence interval [CI], 1.19 to 1.89) and with shorter time to first peritonitis (hazard ratio [HR], 1.48; 95% CI, 1.17 to 1.87), a finding replicated in propensity score–matched cohorts (HR, 1.36; 95% CI, 1.09 to 1.71). Conclusions In an observational registry study, use of biocompatible PD solutions was associated with higher overall peritonitis rates and shorter time to first peritonitis. Further randomized studies adequately powered for a primary peritonitis outcome are warranted. PMID:23949232
NASA Astrophysics Data System (ADS)
Ryżyński, Grzegorz; Nałęcz, Tomasz
2016-10-01
Efficient geological data management in Poland is necessary to support multilevel decision processes of government and local authorities in spatial planning, mineral resources and groundwater supply, and the rational use of the subsurface. The vast amount of geological information gathered in the digital archives and databases of the Polish Geological Survey (PGS) is a basic resource for multi-scale national subsurface management. Data integration is the key factor allowing the development of GIS and web tools for decision makers; however, the main barrier to efficient geological information management is the heterogeneity of data in the resources of the Polish Geological Survey. The engineering-geological database is the first PGS thematic domain addressed in the overall data integration plan. The solutions developed in this area will facilitate the creation of procedures and standards for multilevel data management in PGS. Twenty years of experience in delivering digital engineering-geological mapping at the 1:10 000 scale and in the acquisition and digitisation of archival geotechnical reports have produced a database of more than 300 thousand engineering-geological boreholes as well as a set of 10 thematic spatial layers (including a foundation conditions map, depth to the first groundwater level, bedrock level, and geohazards). Historically, the desktop approach was the source form of geological-engineering data storage, resulting in multiple non-correlated interbase datasets. The need to create a domain data model emerged, and an object-oriented modelling (UML) scheme was developed. The aim of this development was to merge all datasets on one centralised Oracle server and to prepare a unified spatial data structure for efficient web presentation and application development. The presented approach is a milestone toward the creation of a Polish national standard for engineering-geological information management. The paper presents the approach and methodology of data unification and thematic vocabulary harmonisation, the assumptions and results of data modelling, and the process of integrating the domain model with the enterprise architecture implemented in PGS. Currently, there is no geological data standard in Poland. The lack of guidelines for borehole and spatial data management results in increasing data dispersion and a growing barrier to multilevel data management and the implementation of efficient decision support tools. Building a national geological data standard would make geotechnical information accessible to multiple institutions, universities, administration and research organisations and gather their data in the same unified digital form according to the presented data model. Such an approach is compliant with current digital trends and the idea of a Spatial Data Infrastructure. Efficient geological data management is essential to support sustainable development and economic growth, as it allows geological information to assist the idea of Smart Cities, to deliver information for Building Information Modelling (BIM) and to support modern spatial planning. The engineering-geological domain data model presented in the paper is a scalable solution; future implementation of the developed procedures in other domains of PGS geological data is possible.
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
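The Jacobian-free Newton-Krylov engine mentioned above can be sketched compactly: the Krylov solver only needs Jacobian-vector products, which are approximated by finite differences of the nonlinear residual, so the Jacobian is never formed. The Python fragment below is a minimal illustration with an assumed toy residual; it omits the physics-based preconditioner, the FAC multilevel solve and the SAMR machinery of the paper.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, u0, newton_tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: GMRES only sees finite-difference
    Jacobian-vector products of the residual."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < newton_tol:
            break
        def jv(v, u=u, F=F):
            return (residual(u + eps * v) - F) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -F)
        u = u + du
    return u

# toy stand-in residual: solve u**3 + u - 1 = 0 component-wise
print(jfnk(lambda u: u ** 3 + u - 1.0, np.zeros(3)))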
Miller, Christopher A; Parasuraman, Raja
2007-02-01
To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.
Knowledge acquisition for case-based reasoning systems
NASA Technical Reports Server (NTRS)
Riesbeck, Christopher K.
1988-01-01
Case-based reasoning (CBR) is a simple idea: solve new problems by adapting old solutions to similar problems. The CBR approach offers several potential advantages over rule-based reasoning: rules are not combined blindly in a search for solutions, solutions can be explained in terms of concrete examples, and performance can improve automatically as new problems are solved and added to the case library. Moving CBR from the university research environment to the real world requires smooth interfaces for getting knowledge from experts. Described are the basic elements of an interface for acquiring three basic bodies of knowledge that any case-based reasoner requires: the case library of problems and their solutions, the analysis rules that flesh out input problem specifications so that relevant cases can be retrieved, and the adaptation rules that adjust old solutions to fit new problems.
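The three bodies of knowledge map directly onto a retrieve-and-adapt loop. The Python sketch below is a toy illustration under assumed representations (cases as feature dictionaries, a numeric similarity function standing in for the analysis rules, explicit adaptation rules); it is not drawn from the acquisition interface described.

def cbr_solve(problem, case_library, similarity, adaptation_rules):
    # retrieve: most similar stored problem
    best = max(case_library, key=lambda case: similarity(problem, case["problem"]))
    # adapt: apply each adaptation rule whose precondition matches
    solution = best["solution"]
    for applies, adapt in adaptation_rules:
        if applies(problem, best["problem"]):
            solution = adapt(solution, problem, best["problem"])
    return solution

library = [{"problem": {"size": 2}, "solution": 10},
           {"problem": {"size": 8}, "solution": 40}]
sim = lambda p, q: -abs(p["size"] - q["size"])        # toy matching rule
rules = [(lambda p, q: p["size"] != q["size"],        # toy adaptation rule
          lambda s, p, q: s * p["size"] / q["size"])]
print(cbr_solve({"size": 4}, library, sim, rules))    # -> 20.0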
Interdisciplinarity in Adapted Physical Activity
ERIC Educational Resources Information Center
Bouffard, Marcel; Spencer-Cavaliere, Nancy
2016-01-01
It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…
Nagler, Rebekah H.; Bigman, Cabral A.; Ramanadhan, Shoba; Ramamurthi, Divya; Viswanath, K.
2016-01-01
Background Americans remain under-informed about cancer and other health disparities and the social determinants of health (SDH). The news media may be contributing to this knowledge deficit, whether by discussing these issues narrowly or ignoring them altogether. Because local media are particularly important in influencing public opinion and support for public policies, this study examines the prevalence and framing of disparities/SDH in local mainstream and ethnic print news. Methods We conducted a multi-method content analysis of local mainstream (English-language) and ethnic (Spanish-language) print news in two lower-income cities in New England with substantial racial/ethnic minority populations. After establishing inter-coder reliability (kappa=0.63–0.88), coders reviewed the primary English- and Spanish-language newspaper in each city, identifying both disparities and non-disparities health stories published between February 2010 and January 2011. Results Local print news coverage of cancer and other health disparities was rare. Of 650 health stories published across four newspapers during the one-year study period, only 21 (3.2%) discussed disparities/SDH. Although some stories identified causes of and solutions for disparities, these were often framed in individual (e.g., poor dietary habits) rather than social contextual terms (e.g., lack of food availability/affordability). Cancer and other health stories routinely missed opportunities to discuss disparities/SDH. Conclusion Local mainstream and ethnic media may be ideal targets for multilevel interventions designed to address cancer and other health inequalities. Impact By increasing media attention to and framing of health disparities, we may observe important downstream effects on public opinion and support for structural solutions to disparities, particularly at the local level. PMID:27196094
Shieh, Bernard; Sabra, Karim G; Degertekin, F Levent
2016-11-01
A boundary element model provides great flexibility for the simulation of membrane-type micromachined ultrasonic transducers (MUTs) in terms of membrane shape, actuating mechanism, and array layout. Acoustic crosstalk is accounted for through a mutual impedance matrix that captures the primary crosstalk mechanism of dispersive guided modes generated at the fluid-solid interface. However, finding the solution to the fully populated boundary element matrix equation using standard techniques requires computation time and memory usage that scale with the cube and the square of the number of nodes, respectively, limiting simulation to a small number of membranes. We implement a solver with improved speed and efficiency through the application of a multilevel fast multipole algorithm (FMA). By approximating the fields of collections of nodes using multipole expansions of the free-space Green's function, an FMA solver can enable the simulation of hundreds of thousands of nodes while incurring an approximation error that is controllable. Convergence is drastically improved using a problem-specific block-diagonal preconditioner. We demonstrate the solver's capabilities by simulating a 32-element 7-MHz 1-D capacitive MUT (CMUT) phased array with 2880 membranes. The array is simulated using 233280 nodes for a very wide frequency band up to 50 MHz. For a simulation with 15210 nodes, the FMA solver performed ten times faster and used 32 times less memory than a standard solver based on LU decomposition. We investigate the effects of mesh density and phasing on the predicted array response and find that it is necessary to use about seven nodes over the width of the membrane to observe convergence of the solution, even below the first membrane resonance frequency, due to the influence of higher order membrane modes.
NASA Astrophysics Data System (ADS)
Abd-El-Barr, Mostafa
2010-12-01
The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near) minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near) optimal solutions. The ACO algorithm mimics the behaviour of real-world ants in using the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The results obtained using the proposed ACO-DC technique are shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.
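The generic ACO loop underlying such a heuristic can be sketched in a few lines: ants build candidate solutions by sampling options in proportion to pheromone, and pheromone is evaporated and then reinforced along the best solution found so far. The Python fragment below is deliberately generic; the MVL-specific encoding of the decomposition into levels and the direct-cover synthesis of each sub-function are not reproduced, and the parameters are assumptions.

import random

def aco(choices, cost, n_ants=20, n_iter=50, rho=0.1, q=1.0):
    """Generic ACO skeleton: one decision per position, one pheromone value per
    (position, option); ants sample options in proportion to pheromone."""
    tau = [[1.0] * len(opts) for opts in choices]
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            sol = [random.choices(range(len(opts)), weights=tau[i])[0]
                   for i, opts in enumerate(choices)]
            c = cost(sol)
            if c < best_cost:
                best, best_cost = sol, c
        tau = [[(1.0 - rho) * t for t in row] for row in tau]   # evaporation
        for i, k in enumerate(best):                            # reinforce best-so-far
            tau[i][k] += q / (1.0 + best_cost)
    return best, best_cost

# toy usage: pick one option per position to minimise the sum of chosen values
choices = [[3, 1, 2], [5, 4], [0, 7]]
print(aco(choices, cost=lambda sol: sum(choices[i][k] for i, k in enumerate(sol))))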
Gierczak, R F D; Devlin, J F; Rudolph, D L
2006-01-05
Elevated nitrate concentrations within a municipal water supply aquifer led to pilot testing of a field-scale, in situ denitrification technology based on carbon substrate injections. In advance of the pilot test, detailed characterization of the site was undertaken. The aquifer consisted of complex, discontinuous and interstratified silt, sand and gravel units, similar to other well-studied aquifers of glaciofluvial origin, 15-40 m deep. Laboratory and field tests, including a conservative tracer test, a pumping test, a borehole flowmeter test, grain-size analysis of drill cuttings and core material, and permeameter testing on core samples, were performed on the most productive depth range (27-40 m), and the results were compared. The velocity profiles derived from the tracer tests served as the basis for comparison with other methods. The spatial variation in K based on grain-size analysis using the Hazen method was poorly correlated with the breakthrough data. Trends in relative hydraulic conductivity (K/K(avg)) from permeameter testing compared somewhat better. However, the trends in transient drawdown with depth, measured in multilevel sampling points, corresponded particularly well with those of solute mass flux. Estimates of absolute K, based on standard pumping test analysis of the multilevel drawdown data, were inversely correlated with the tracer test data. The inverse nature of the correlation was attributed to assumptions in the transient drawdown packages that were inconsistent with the variable diffusivities encountered at the scale of the measurements. Collectively, the data showed that despite a relatively low variability in K within the aquifer under study (within a factor of 3), water and solute mass fluxes were concentrated in discrete intervals that could be targeted for later bioremediation.
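The Hazen method mentioned above is an empirical relation between the 10th-percentile grain diameter d10 and hydraulic conductivity, commonly written K ~ C*d10^2 with d10 in millimetres, K in cm/s, and C an empirical coefficient near 1 for clean sands. The short sketch below applies it to hypothetical d10 values; none of these numbers come from the study.

```python
# Hazen approximation K ~ C * d10**2, with d10 in mm, K in cm/s and C an empirical
# coefficient (values near 1.0 are commonly quoted for clean sands).  The d10
# values below are hypothetical and not taken from the study.
def hazen_k(d10_mm, c=1.0):
    return c * d10_mm ** 2          # hydraulic conductivity in cm/s

samples = {"silty sand": 0.08, "medium sand": 0.35, "sandy gravel": 0.90}
for name, d10 in samples.items():
    k = hazen_k(d10)
    print(f"{name:12s} d10 = {d10:4.2f} mm   K ~ {k:.4f} cm/s  ({k / 100.0:.2e} m/s)")
```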
NASA Astrophysics Data System (ADS)
Ben Regaya, Chiheb; Farhani, Fethi; Zaafouri, Abderrahmen; Chaari, Abdelkader
2018-02-01
This paper presents a new adaptive Backstepping technique to handle the induction motor (IM) rotor resistance tracking problem. The proposed solution improves the robustness of the control system. Given the static error present when estimating the rotor resistance with classical methods, and the sensitivity to load torque variation at low speed, a new Backstepping observer enhanced with an integral action on the tracking errors is presented, which is established in two steps. The first step estimates the rotor flux using a Backstepping observer. The second step defines the adaptation mechanism of the rotor resistance based on the estimated rotor flux. The asymptotic stability of the observer is proven by Lyapunov theory. To validate the proposed solution, simulation and experimental benchmarking of a 3 kW induction motor are presented and analyzed. The obtained results show the effectiveness of the proposed solution compared to the model reference adaptive system (MRAS) rotor resistance observer presented in other recent works.
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, the search for low-energy-consumption or even energy-harvesting green buildings is amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
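The key computational step described above, an adaptively soft-thresholded singular value decomposition in which the weights decrease with the singular values, can be sketched as follows. This is an illustrative implementation under assumed weights w_i = d_i^(-gamma) and an arbitrary penalty lambda, not the paper's exact estimator or tuning procedure, and the demo data are simulated.

```python
import numpy as np

def adaptive_soft_threshold_svd(Y, lam, gamma=2.0):
    """Low-rank fit by adaptively soft-thresholding singular values.

    Weights w_i = d_i**(-gamma) decrease as the singular value d_i grows, so
    large (signal) singular values are penalized less than small (noise) ones.
    The weighting and tuning here are illustrative choices, not the paper's.
    """
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    w = 1.0 / np.maximum(d, 1e-12) ** gamma
    d_shrunk = np.maximum(d - lam * w, 0.0)   # soft-threshold each singular value
    return (U * d_shrunk) @ Vt

# Hypothetical demo: exact rank-3 signal with singular values 30, 20, 10 plus noise.
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((50, 3)))
V0, _ = np.linalg.qr(rng.standard_normal((20, 3)))
Y = (U0 * [30.0, 20.0, 10.0]) @ V0.T + 0.3 * rng.standard_normal((50, 20))

Y_hat = adaptive_soft_threshold_svd(Y, lam=100.0)
print("rank of the thresholded fit:", np.linalg.matrix_rank(Y_hat, tol=1e-8))
```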
Grid adaption for bluff bodies
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1986-01-01
Methods of grid adaptation are reviewed and a method is developed with the capability of adaptation to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also, the method was formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method and a movie was produced which shows simultaneously the transient behavior of the solution and the grid adaptation. The results are compared with the experimental and other numerical results.
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
Multi-level obstruction in obstructive sleep apnoea: prevalence, severity and predictive factors.
Phua, C Q; Yeo, W X; Su, C; Mok, P K H
2017-11-01
To characterise multi-level obstruction in terms of prevalence, obstructive sleep apnoea severity and predictive factors, and to collect epidemiological data on upper airway morphology in obstructive sleep apnoea patients. Retrospective review of 250 obstructive sleep apnoea patients. On clinical examination, 171 patients (68.4 per cent) had multi-level obstruction, 49 (19.6 per cent) had single-level obstruction and 30 (12 per cent) showed no obstruction. Within each category of obstructive sleep apnoea severity, multi-level obstruction was more prevalent. Multi-level obstruction was associated with severe obstructive sleep apnoea (more than 30 events per hour) (p = 0.001). Obstructive sleep apnoea severity increased with the number of obstruction sites (correlation coefficient = 0.303, p < 0.001). Multi-level obstruction was more likely in younger (p = 0.042), male (p = 0.045) patients, with high body mass index (more than 30 kg/m2) (p < 0.001). Palatal (p = 0.004), tongue (p = 0.026) and lateral pharyngeal wall obstructions (p = 0.006) were associated with severe obstructive sleep apnoea. Multi-level obstruction is more prevalent in obstructive sleep apnoea and is associated with increased severity. Obstruction at certain anatomical levels contributes more towards obstructive sleep apnoea severity.
Dynamic mesh adaption for triangular and tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.
ERIC Educational Resources Information Center
Connections: A Journal of Adult Literacy, 1997
1997-01-01
This issue contains 12 articles written by teachers who have investigated various aspects of the multilevel question in their own classrooms. "The Multilevel Question" (Lenore Balliro) provides an introduction. "Deconstructing the Great Wall of Print" (Richard Goldberg) investigates reading strategies that allow students with a wide range of…
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
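At the core of the approach is the multilevel Monte Carlo idea of writing an expectation on the finest time grid as a telescoping sum of a coarse estimate plus corrections between consecutive grid levels, with coupled fine/coarse paths driven by the same Brownian increments. The sketch below shows that estimator for a plain Euler-Maruyama discretisation of geometric Brownian motion; it is an illustrative stand-alone MLMC example, not the filtering algorithm of the paper, and the SDE, level count, and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm_pair(level, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2, m0=4):
    """Simulate X_T on time grid `level` and on the next-coarser grid using the
    SAME Brownian increments -- the coupling at the heart of multilevel MC."""
    n_fine = m0 * 2 ** level
    dt = T / n_fine
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_fine))
    xf = np.full(n_paths, x0)
    for k in range(n_fine):                      # fine Euler-Maruyama path
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if level == 0:
        return xf, np.zeros(n_paths)
    xc = np.full(n_paths, x0)
    dWc = dW.reshape(n_paths, n_fine // 2, 2).sum(axis=2)   # pair up fine increments
    for k in range(n_fine // 2):                 # coarse path with step 2*dt
        xc = xc + mu * xc * (2.0 * dt) + sigma * xc * dWc[:, k]
    return xf, xc

L, N = 4, 20000
estimate = 0.0
for level in range(L + 1):
    fine, coarse = euler_gbm_pair(level, N)
    estimate += np.mean(fine - coarse)           # telescoping sum of level corrections
print("MLMC estimate of E[X_T]:", round(float(estimate), 4), " exact:", round(np.exp(0.05), 4))
```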
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
Misra, Sanchit; Pamnany, Kiran; Aluru, Srinivas
2015-01-01
Construction of whole-genome networks from large-scale gene expression data is an important problem in systems biology. While several techniques have been developed, most cannot handle network reconstruction at the whole-genome scale, and the few that can, require large clusters. In this paper, we present a solution on the Intel Xeon Phi coprocessor, taking advantage of its multi-level parallelism including many x86-based cores, multiple threads per core, and vector processing units. We also present a solution on the Intel® Xeon® processor. Our solution is based on TINGe, a fast parallel network reconstruction technique that uses mutual information and permutation testing for assessing statistical significance. We demonstrate the first ever inference of a plant whole genome regulatory network on a single chip by constructing a 15,575 gene network of the plant Arabidopsis thaliana from 3,137 microarray experiments in only 22 minutes. In addition, our optimization for parallelizing mutual information computation on the Intel Xeon Phi coprocessor holds out lessons that are applicable to other domains.
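The statistical kernel described above, mutual information between pairs of expression profiles with permutation testing for significance, can be sketched in a few lines. The snippet below uses a simple histogram estimator and a naive permutation loop on simulated data; it is only a conceptual stand-in for the heavily optimised, vectorised estimator used on the Xeon Phi.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Crude histogram-based mutual information estimate (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def permutation_pvalue(x, y, n_perm=500, rng=None):
    """Fraction of permutations whose MI is at least as large as the observed MI."""
    if rng is None:
        rng = np.random.default_rng(0)
    observed = mutual_information(x, y)
    null = np.array([mutual_information(rng.permutation(x), y) for _ in range(n_perm)])
    return observed, float((null >= observed).mean())

# Hypothetical pair of gene-expression profiles across 200 "experiments".
rng = np.random.default_rng(3)
g1 = rng.standard_normal(200)
g2 = 0.7 * g1 + 0.5 * rng.standard_normal(200)   # correlated partner gene
mi, p = permutation_pvalue(g1, g2, rng=rng)
print(f"MI = {mi:.3f}, permutation p-value = {p:.3f}")
```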
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and estimation method; maximum likelihood estimation with robust chi-squares and standard errors and Bayesian estimation, on the convergence rate are investigated. The other variables of interest were rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between level collinearity and the estimation method was maximum likelihood. In the within level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
A closed-loop neurobotic system for fine touch sensing
NASA Astrophysics Data System (ADS)
Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.
2013-08-01
Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to a fast and highly performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covariate with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.
Hirth, Melissa J; Howell, Julianne W; O'Brien, Lisa
Case report. Injuries to adjacent fingers with differing extensor tendon (ET) zones and/or sagittal band pose a challenge to therapists as no treatment guidelines exist. This report highlights how the relative motion flexion/extension (RMF/RME) concepts were combined into one orthosis to manage a zone IV ET repair (RME) and a zone III central slip repair (RMF) in adjacent fingers (Case 1); and how a single RME orthosis was adapted to limit proximal interphalangeal joint motion to manage multi-level ET zone III-IV injuries and a sagittal band repair in adjacent fingers (Case 2). Adapted relative motion orthoses allowed early active motion and graded exercises based on clinical reasoning and evidence. Outcomes were standard TAM% and Miller's criteria. 'Excellent' and 'good' outcomes were achieved by twelve weeks post-surgery. Both cases returned to unrestricted work at 6 and 7 weeks. Neither reported functional deficits at discharge. Outcomes in 2 cases involving multiple digit injuries exceeded those previously reported for ET zone III-IV repairs. Relative motion orthoses can be adapted and applied to multi-finger injuries, eliminating the need for multiple, bulky or functionally limiting orthoses. Level of evidence: 4. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
2017-01-01
The type and variety of learning strategies used by individuals to acquire behaviours in the wild are poorly understood, despite the presence of behavioural traditions in diverse taxa. Social learning strategies such as conformity can be broadly adaptive, but may also retard the spread of adaptive innovations. Strategies like pay-off-biased learning, by contrast, are effective at diffusing new behaviour but may perform poorly when adaptive behaviour is common. We present a field experiment in a wild primate, Cebus capucinus, that introduced a novel food item and documented the innovation and diffusion of successful extraction techniques. We develop a multilevel, Bayesian statistical analysis that allows us to quantify individual-level evidence for different social and individual learning strategies. We find that pay-off-biased and age-biased social learning are primarily responsible for the diffusion of new techniques. We find no evidence of conformity; instead rare techniques receive slightly increased attention. We also find substantial and important variation in individual learning strategies that is patterned by age, with younger individuals being more influenced by both social information and their own individual experience. The aggregate cultural dynamics in turn depend upon the variation in learning strategies and the age structure of the wild population. PMID:28592681
Organic compatible solutes of halotolerant and halophilic microorganisms
Roberts, Mary F
2005-01-01
Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that effect molecular stability are reviewed. PMID:16176595
A Multilevel Assessment of Differential Item Functioning.
ERIC Educational Resources Information Center
Shen, Linjun
A multilevel approach was proposed for the assessment of differential item functioning and compared with the traditional logistic regression approach. Data from the Comprehensive Osteopathic Medical Licensing Examination for 2,300 freshman osteopathic medical students were analyzed. The multilevel approach used three-level hierarchical generalized…
A Multi-Level Model of Information Seeking in the Clinical Domain
Hung, Peter W.; Johnson, Stephen B.; Kaufman, David R.; Mendonça, Eneida A.
2008-01-01
Objective: Clinicians often have difficulty translating information needs into effective search strategies to find appropriate answers. Information retrieval systems employing an intelligent search agent that generates adaptive search strategies based on human search expertise could be helpful in meeting clinician information needs. A prerequisite for creating such systems is an information seeking model that facilitates the representation of human search expertise. The purpose of developing such a model is to provide guidance to information seeking system development and to shape an empirical research program. Design: The information seeking process was modeled as a complex problem-solving activity. After considering how similarly complex activities had been modeled in other domains, we determined that modeling context-initiated information seeking across multiple problem spaces allows the abstraction of search knowledge into functionally consistent layers. The knowledge layers were identified in the information science literature and validated through our observations of searches performed by health science librarians. Results: A hierarchical multi-level model of context-initiated information seeking is proposed. Each level represents (1) a problem space that is traversed during the online search process, and (2) a distinct layer of knowledge that is required to execute a successful search. Grand strategy determines what information resources will be searched, for what purpose, and in what order. The strategy level represents an overall approach for searching a single resource. Tactics are individual moves made to further a strategy. Operations are mappings of abstract intentions to information resource-specific concrete input. Assessment is the basis of interaction within the strategic hierarchy, influencing the direction of the search. Conclusion: The described multi-level model provides a framework for future research and the foundation for development of an automated information retrieval system that uses an intelligent search agent to bridge clinician information needs and human search expertise. PMID:18006383
Post-stroke balance rehabilitation under multi-level electrotherapy: a conceptual review
Dutta, Anirban; Lahiri, Uttama; Das, Abhijit; Nitsche, Michael A.; Guiraud, David
2014-01-01
Stroke is caused when an artery carrying blood from the heart to an area in the brain bursts or a clot obstructs the blood flow, thereby preventing delivery of oxygen and nutrients. About half of the stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to respond to intrinsic or extrinsic stimuli by reorganizing its structure, function, and connections is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. It has been shown that active cortical participation in a closed-loop brain machine interface (BMI) can induce neuroplasticity in cortical networks where the brain acts as a controller, e.g., during a visuomotor task. Here, the motor task can be assisted with neuromuscular electrical stimulation (NMES) where the BMI will act as a real-time decoder. However, the cortical control and induction of neuroplasticity in a closed-loop BMI is also dependent on the state of the brain, e.g., visuospatial attention during visuomotor task performance. In fact, spatial neglect is a hidden disability that is a common complication of stroke and is associated with prolonged hospital stays, accidents, falls, safety problems, and chronic functional disability. This hypothesis and theory article presents a multi-level electrotherapy paradigm toward motor rehabilitation in virtual reality that postulates that while the brain acts as a controller in a closed-loop BMI to drive NMES, the state of the brain can be altered toward improvement of visuomotor task performance with non-invasive brain stimulation (NIBS). This leads to a multi-level electrotherapy paradigm where a virtual reality-based adaptive response technology is proposed for post-stroke balance rehabilitation. In this article, we present a conceptual review of the related experimental findings. PMID:25565937
On the implementation of an accurate and efficient solver for convection-diffusion equations
NASA Astrophysics Data System (ADS)
Wu, Chin-Tien
In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.
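The stopping-criterion idea described at the end of the abstract, namely stopping the iterative linear solver once the algebraic residual falls below the scale of the discretization error so that further iterations cannot improve the finite element solution, is illustrated below for a 1-D Poisson model problem. The C*h^2 stand-in for the a posteriori bound and all problem parameters are illustrative; the thesis uses residual-based (Verfurth) and local-solve (Kay and Silvester) estimators instead.

```python
import numpy as np
from scipy.sparse import diags

# 1-D Poisson model problem -u'' = f on (0,1), u(0) = u(1) = 0, exact u = sin(pi x).
n = 200
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr() / h**2
xg = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * xg)

# Stand-in a posteriori bound eta ~ C*h^2 (a crude placeholder for the thesis's
# residual-based or local-solve estimators).
eta = 0.5 * h**2

# Plain conjugate gradients, stopped once the algebraic residual falls below the
# discretization-error scale: iterating further cannot improve overall accuracy.
u = np.zeros(n)
r = f - A @ u
p = r.copy()
its = 0
while np.linalg.norm(r) > eta * np.linalg.norm(f) and its < 10 * n:
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    u = u + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    its += 1

print("CG iterations:", its, " max nodal error:", np.max(np.abs(u - np.sin(np.pi * xg))))
```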
Post test review of a single car test of multi-level passenger equipment
DOT National Transportation Integrated Search
2008-04-22
The single car test of multi-level equipment described in : this paper was designed to help evaluate the crashworthiness of : a multi-level car in a controlled collision. The data collected : from this test will be used to refine engineering models. ...
Multilevel Modeling of Social Segregation
ERIC Educational Resources Information Center
Leckie, George; Pillinger, Rebecca; Jones, Kelvyn; Goldstein, Harvey
2012-01-01
The traditional approach to measuring segregation is based upon descriptive, non-model-based indices. A recently proposed alternative is multilevel modeling. The authors further develop the argument for a multilevel modeling approach by first describing and expanding upon its notable advantages, which include an ability to model segregation at a…
Multilevel and Diverse Classrooms
ERIC Educational Resources Information Center
Baurain, Bradley, Ed.; Ha, Phan Le, Ed.
2010-01-01
The benefits and advantages of classroom practices incorporating unity-in-diversity and diversity-in-unity are what "Multilevel and Diverse Classrooms" is all about. Multilevel classrooms--also known as mixed-ability or heterogeneous classrooms--are a fact of life in ESOL programs around the world. These classrooms are often not only…
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.
1995-01-01
This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
Functional specialization in regulation and quality control in thermal adaptive evolution.
Yama, Kazuma; Matsumoto, Yuki; Murakami, Yoshie; Seno, Shigeto; Matsuda, Hideo; Gotoh, Kazuyoshi; Motooka, Daisuke; Nakamura, Shota; Ying, Bei-Wen; Yomo, Tetsuya
2015-11-01
Distinctive survival strategies, specialized in regulation and in quality control, were observed in thermal adaptive evolution with a laboratory Escherichia coli strain. The two specialists carried a single mutation either within rpoH or upstream of groESL, which led to activated global regulation by sigma factor 32 or an increased amount of GroEL/ES chaperonins, respectively. Although both specialists succeeded in thermal adaptation, the common winner of the evolution was the specialist in quality control, that is, the strategy of chaperonin-mediated protein folding. To understand this evolutionary consequence, multilevel analyses of cellular status, for example, transcriptome, protein and growth fitness, were carried out. The specialist in quality control showed less change in transcriptional reorganization in response to the temperature increase, which was consistent with the finding that the two specialists showed biased expression of molecular chaperones. Such repressed changes in gene expression seemed to be advantageous for long-term sustainability because a specific increase in chaperonins not only facilitated the folding of essential gene products but also saved cost in gene expression compared with the overall transcriptional increase induced by rpoH regulation. Functional specialization offered two strategies for successful thermal adaptation, whereas the evolutionary advantage lay more in the cost savings in gene expression and the essentiality of protein folding. © 2015 The Authors. Genes to Cells published by Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
Aerodynamics of Engine-Airframe Interaction
NASA Technical Reports Server (NTRS)
Caughey, D. A.
1986-01-01
The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focusses upon the development of solution-adaptive grid procedures for these problems, and the development of multi-grid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems, and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.
Multiscale computations with a wavelet-adaptive algorithm
NASA Astrophysics Data System (ADS)
Rastigejev, Yevgenii Anatolyevich
A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution which are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation as well as related computational algorithms to support the grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
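The adaptation mechanism sketched above, computing wavelet coefficients, comparing them with a threshold, and retaining collocation points only where the coefficients are significant, can be illustrated with the PyWavelets library on a 1-D profile containing a sharp internal layer. The wavelet family, threshold, and test function below are arbitrary illustration choices, not those of the dissertation.

```python
import numpy as np
import pywt

# Wavelet-coefficient thresholding as an adaptation indicator: keep detail
# coefficients above a tolerance, i.e. retain resolution only where the profile
# has fine-scale structure.
x = np.linspace(0.0, 1.0, 1024)
u = np.tanh(80.0 * (x - 0.5)) + 0.1 * np.sin(6.0 * np.pi * x)   # sharp internal layer

coeffs = pywt.wavedec(u, "db4", level=6)
eps = 1e-3
kept, total = 0, 0
thresholded = [coeffs[0]]                    # always keep the coarse approximation
for detail in coeffs[1:]:
    mask = np.abs(detail) >= eps
    kept += int(mask.sum())
    total += detail.size
    thresholded.append(np.where(mask, detail, 0.0))

u_rec = pywt.waverec(thresholded, "db4")[:u.size]
print(f"detail coefficients kept: {kept}/{total},",
      "max reconstruction error:", float(np.max(np.abs(u_rec - u))))
```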
Multilevel Higher-Order Item Response Theory Models
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
ERIC Educational Resources Information Center
Schölmerich, Vera L. N.; Kawachi, Ichiro
2016-01-01
Scholars and practitioners frequently make recommendations to develop family planning interventions that are "multilevel." Such interventions take explicit account of the role of environments by incorporating multilevel or social-ecological frameworks into their design and implementation. However, research on how interventions have…
Multilevel Modeling: A Review of Methodological Issues and Applications
ERIC Educational Resources Information Center
Dedrick, Robert F.; Ferron, John M.; Hess, Melinda R.; Hogarty, Kristine Y.; Kromrey, Jeffrey D.; Lang, Thomas R.; Niles, John D.; Lee, Reginald S.
2009-01-01
This study analyzed the reporting of multilevel modeling applications of a sample of 99 articles from 13 peer-reviewed journals in education and the social sciences. A checklist, derived from the methodological literature on multilevel modeling and focusing on the issues of model development and specification, data considerations, estimation, and…
Building Path Diagrams for Multilevel Models
ERIC Educational Resources Information Center
Curran, Patrick J.; Bauer, Daniel J.
2007-01-01
Multilevel models have come to play an increasingly important role in many areas of social science research. However, in contrast to other modeling strategies, there is currently no widely used approach for graphically diagramming multilevel models. Ideally, such diagrams would serve two functions: to provide a formal structure for deriving the…
The goal of this study was to evaluate the possible use of the Environmental Relative Moldiness Index (ERMI) to quantify mold contamination in multi-level, office buildings. Settled-dust samples were collected in multi-level, office buildings and the ERMI value for each sample de...
Teaching Multilevel Adult ESL Classes. ERIC Digest.
ERIC Educational Resources Information Center
Shank, Cathy C.; Terrill, Lynda R.
Teachers in multilevel adult English-as-a-Second-Language classes are challenged to use a variety of materials, activities, and techniques to engage the interest of the learners and assist them in their educational goals. This digest recommends ways to choose and organize content for multilevel classes, explains grouping strategies, discusses a…
Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM
ERIC Educational Resources Information Center
Li, Xin; Beretvas, S. Natasha
2013-01-01
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Conducting Multilevel Analyses in Medical Education
ERIC Educational Resources Information Center
Zyphur, Michael J.; Kaplan, Seth A.; Islam, Gazi; Barsky, Adam P.; Franklin, Michael S.
2008-01-01
A significant body of education literature has begun using multilevel statistical models to examine data that reside at multiple levels of analysis. In order to provide a primer for medical education researchers, the current work gives a brief overview of some issues associated with multilevel statistical modeling. To provide an example of this…
Multilevel modelling: Beyond the basic applications.
Wright, Daniel B; London, Kamala
2009-05-01
Over the last 30 years statistical algorithms have been developed to analyse datasets that have a hierarchical/multilevel structure. Particularly within developmental and educational psychology these techniques have become common where the sample has an obvious hierarchical structure, like pupils nested within a classroom. We describe two areas beyond the basic applications of multilevel modelling that are important to psychology: modelling the covariance structure in longitudinal designs and using generalized linear multilevel modelling as an alternative to methods from signal detection theory (SDT). Detailed code for all analyses is described using packages for the freeware R.
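The chapter's examples are written in R, but the basic random-intercept multilevel model (pupils nested within classrooms) can equally be fitted with Python's statsmodels, as in the sketch below on simulated data; the variable names and effect sizes are invented for illustration and are not taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated pupils-within-classrooms data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n_class, n_pupil = 30, 25
classroom = np.repeat(np.arange(n_class), n_pupil)
class_effect = rng.normal(0.0, 2.0, n_class)[classroom]     # random intercept per classroom
hours = rng.uniform(0.0, 10.0, n_class * n_pupil)
score = 50.0 + 1.5 * hours + class_effect + rng.normal(0.0, 3.0, n_class * n_pupil)
df = pd.DataFrame({"score": score, "hours": hours, "classroom": classroom})

# Random-intercept multilevel model: pupils (level 1) nested in classrooms (level 2).
model = smf.mixedlm("score ~ hours", df, groups=df["classroom"])
result = model.fit()
print(result.summary())
```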
Design and Implementation of 13 Levels Multilevel Inverter for Photovoltaic System
NASA Astrophysics Data System (ADS)
Subramani, C.; Dhineshkumar, K.; Palanivel, P.
2018-04-01
This paper addresses the design and implementation of an S-type, PV-based 13-level multilevel inverter with a reduced number of switches. Existing S-type multilevel inverters contain larger numbers of switches and voltage sources. The multilevel inverter is one of the most beneficial power converters for high-power and present-day applications when built with reduced switch counts. The fundamental arrangement of the 13-level multilevel inverter is to obtain a stepped voltage from several levels of DC voltages. The controller provides accurate switching periods to the switches through a driver circuit using a PWM methodology. The performance of the proposed multilevel inverter is evaluated using MATLAB/Simulink and is shown to compare favourably with the other existing systems considered.
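The stepped-voltage principle described above can be visualised numerically: a sinusoidal reference is quantised onto 13 equally spaced DC levels, which is the waveform a 13-level inverter synthesises before filtering. The peak voltage, fundamental frequency, and sampling below are arbitrary illustration values, not parameters from the paper.

```python
import numpy as np

# Quantise a sinusoidal reference onto 13 equally spaced levels, i.e. the stepped
# voltage a 13-level inverter synthesises before filtering.
levels = 13
v_peak = 300.0                                   # volts (hypothetical)
step = 2.0 * v_peak / (levels - 1)               # 6 positive + 6 negative levels + zero

t = np.linspace(0.0, 0.02, 1000)                 # one 50 Hz cycle
reference = v_peak * np.sin(2.0 * np.pi * 50.0 * t)
staircase = step * np.round(reference / step)    # nearest of the 13 discrete levels

# Finer steps track the sine more closely, so low-order harmonics shrink as the
# level count grows; here we just report the rms deviation from the reference.
rms_error = np.sqrt(np.mean((staircase - reference) ** 2)) / (v_peak / np.sqrt(2.0))
print("distinct levels used:", len(np.unique(staircase)),
      " rms deviation / rms fundamental:", round(float(rms_error), 3))
```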
NASA Astrophysics Data System (ADS)
Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin
2017-08-01
Multilevel data storage for phase-change memory (PCM) has attracted increasing attention in the memory market as a way to implement high-capacity memory systems and reduce cost per bit. In this work, we present a universal programming method based on a SET staircase current pulse in PCM cells, which exploits the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET staircase waveform can be optimized by real-time TCAD simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 k-bit PCM test chip validate the proposed multilevel programming scheme, which improves information storage density, the robustness of the resistance levels, and energy efficiency while avoiding process complexity.
Integrated structure/control law design by multilevel optimization
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Schmidt, David K.
1989-01-01
A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C′ which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than that of C. In the last part, error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
Madhombiro, Munyaradzi; Dube-Marimbe, Bazondlile; Dube, Michelle; Chibanda, Dixon; Zunza, Moleen; Rusakaniko, Simbarashe; Stewart, David; Seedat, Soraya
2017-01-28
Interventions for alcohol use disorders (AUDs) in HIV infected individuals have been primarily targeted at HIV risk reduction and improved antiretroviral treatment adherence. However, reduction in alcohol use is an important goal. Alcohol use affects other key factors that may influence treatment course and outcome. In this study, the authors aim to administer an adapted intervention for AUDs to reduce alcohol use in people living with HIV/AIDS (PLWHA). This study is a cluster randomised controlled trial at 16 HIV care clinics. A motivational interviewing and cognitive behavioural therapy based intervention for AUDs, developed through adaptation and piloted in Zimbabwe, will be administered to PLWHA with AUDs recruited at HIV clinics. The intervention will be administered over 16 sessions at 8 HIV clinics. This intervention will be compared with an equal attention control in the form of the World Health Organization Mental Health Gap Action Programme (WHO mhGAP) guide, adapted for the Zimbabwean context. General function, quality of life, and adherence to highly active antiretroviral treatment (HAART) will be secondary outcomes. Booster sessions will be administered to both groups at 3 and 6 months respectively. The primary outcome measure will be the Alcohol Use Disorder Identification Test (AUDIT) score. The World Health Organisation Disability Assessment Schedule 2.0 (WHODAS 2.0), World Health Organisation Quality of Life (WHOQoL) HIV, viral load, and CD4 counts will be secondary outcome measures. Outcome assessments will be administered at baseline, 3, 6, and 12 months. Moderating factors such as perceived social support, how people cope with difficult situations and post-traumatic exposure and experience will be assessed at baseline. Trained research assistants will recruit participants. The outcome assessors who will be trained in administering the outcome and moderating tools will be blinded to the treatment arms allocated to the participants. However, the principal investigator, participants and intervention staff will be unblinded. Data will be analysed using STATA Version 14. Primary and secondary outcomes will be measured at four time points, that is, at baseline, 3, 6, and 12 months, respectively. All participants will be included in the analysis of primary and secondary outcome measures. The mean AUDIT scores will be compared between groups using Student t-tests. Multilevel logistic regression analysis will be performed for binomial variables and multilevel linear regression for continuous variables. Descriptive statistics will be computed for baseline and follow-up assessments. The study will be the first to address problematic alcohol use in PLWHA in Zimbabwe. It seeks to use local resources in delivering a modified, brief, evidence-based, and culturally contextualised intervention. The study results will determine the effectiveness of adapting psychological interventions for AUDs in HIV infected adults using a task-sharing framework. Pan African Clinical Trial Registry, PACTR201509001211149. Registered 22 July 2015.
Phase plate technology for laser marking of magnetic discs
Neuman, Bill; Honig, John; Hackel, Lloyd; Dane, C. Brent; Dixit, Shamasundar
1998-01-01
An advanced design for a phase plate enables the distribution of spots in arbitrarily shaped patterns with very high uniformity and with a continuously or near-continuously varying phase pattern. A continuous phase pattern eliminates large phase jumps typically expected in a grating that provides arbitrary shapes. Large phase jumps increase scattered light outside of the desired pattern, reduce efficiency and can make the grating difficult to manufacture. When manufacturing capabilities preclude producing a fully continuous grating, the present design can be easily adapted to minimize manufacturing errors and maintain high efficiencies. This continuous grating is significantly more efficient than previously described Dammann gratings, offers much more flexibility in generating spot patterns and is easier to manufacture and replicate than a multi-level phase grating.
GIS4schools: custom-made GIS-applications for educational use
NASA Astrophysics Data System (ADS)
Demharter, Timo; Michel, Ulrich
2013-10-01
From a didactic point of view, the procurement and application of modern geographical methods and functions are becoming more and more important. Although the integration of GIS in the classroom is repeatedly demanded, inter alia in Baden-Württemberg, Germany, the number of GIS users is small in comparison to other European countries or the USA. Possible reasons for this could, for instance, lie in the lack of GIS and computer knowledge among teachers themselves and the subsequent extensive training effort required for Desktop-GIS [1]. Today, the technological possibilities exist to provide the broader public with geoinformation and geotechnology: web technologies offer access to web-based, mobile and local applications through simple gateways. The objective of the project "GIS4schools" is to generate a service-based infrastructure which can be operated via mobile clients as well as via Desktop-GIS or a browser. Due to the easy availability of the services, the focus is in particular on students. This circumstance is a novelty through which a differentiated approach to the implementation of GIS in schools is established. Accordingly, the pilot nature of this project becomes apparent, as well as its greater importance beyond its actual content, especially for the sector of media development at colleges of education. The continuity from Web-GIS to Desktop-GIS is innovative: the goal is to create an adapted multi-level solution which allows both an easy introduction, if desired, and a detailed analysis, with a focus especially on students and their cooperation with one another.
GIS4schools: a new approach in GIS education
NASA Astrophysics Data System (ADS)
Demharter, Timo; Michel, Ulrich
2012-10-01
From a didactic point of view, the procurement and application of modern geographical methods and functions are becoming more and more important. Although the integration of GIS in the classroom is repeatedly demanded, inter alia in Baden-Württemberg, Germany, the number of GIS users is small in comparison to other European countries or the USA. Possible reasons for this could, for instance, lie in the lack of GIS and computer knowledge among teachers themselves and the subsequent extensive training effort required for Desktop-GIS (KERSKI 2000, SCHLEICHER 2004). Today, the technological possibilities exist to provide the broader public with geoinformation and geotechnology: web technologies offer access to web-based, mobile and local applications through simple gateways. The objective of the project "GIS4schools" is to generate a service-based infrastructure which can be operated via mobile clients as well as via Desktop-GIS or a browser. Due to the easy availability of the services, the focus is in particular on students. This circumstance is a novelty through which a differentiated approach to the implementation of GIS in schools is established. Accordingly, the pilot nature of this project becomes apparent, as well as its greater importance beyond its actual content, especially for the sector of media development at colleges of education. The continuity from Web-GIS to Desktop-GIS is innovative: the goal is to create an adapted multi-level solution which allows both an easy introduction, if desired, and a detailed analysis, with a focus especially on students and their cooperation with one another.
Smokowski, Paul R; Guo, Shenyang; Cotter, Katie L; Evans, Caroline B R; Rose, Roderick A
2016-01-01
The current study examined multilevel risk factors and developmental assets on longitudinal trajectories of aggressive behavior in a diverse sample of rural adolescents. Using ecological and social capital theories, we explored the impact of positive and negative proximal processes, social capital, and contextual characteristics (i.e., school and neighborhood) on adolescent aggression. Data came from the Rural Adaptation Project, which is a 5-year longitudinal panel study of more than 4,000 middle and high school students from 40 public schools in two rural, low income counties in North Carolina. A three-level HLM model (N = 4,056 at Wave 1, 4,251 at Wave 2, and 4,256 at Wave 3) was estimated to predict factors affecting the change trajectories of aggression. Results indicated that negative proximal processes in the form of parent-adolescent conflict, friend rejection, peer pressure, delinquent friends, and school hassles were significant predictors of aggression. In addition, social capital in the form of ethnic identity, religious orientation, and school satisfaction served as buffers against aggression. Negative proximal processes were more salient predictors than positive proximal processes. School and neighborhood characteristics had a minimal impact on aggression. Overall, rates of aggression did not change significantly over the 3-year study window. Findings highlight the need to intervene in order to decrease negative interactions in the peer and parent domains. © 2015 Wiley Periodicals, Inc.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by future proposed NASA missions, such as InSAR for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
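To make the goal-oriented refinement idea concrete, the minimal Python sketch below marks elements for refinement by weighting primal residuals with the local influence of the dual (adjoint) solution, which is the essence of the dual-weighted residual estimator described above; the fixed-fraction marking rule and the toy inputs are assumptions, not the authors' exact strategy.

```python
import numpy as np

def mark_elements_dwr(primal_residuals, dual_weights, fraction=0.2):
    """Dual-weighted residual marking: weight each element's primal residual by
    the local influence of the dual (adjoint) solution, then flag the elements
    with the largest goal-oriented indicators for refinement."""
    indicators = np.abs(primal_residuals * dual_weights)
    order = np.argsort(indicators)[::-1]
    n_refine = max(1, int(fraction * len(indicators)))
    return order[:n_refine]

# Toy usage with hypothetical per-element residuals and adjoint weights: an
# element with a modest residual but a large dual weight is refined first.
residuals = np.array([0.10, 0.50, 0.02, 0.30])
weights = np.array([1.0, 0.1, 2.0, 0.8])
print(mark_elements_dwr(residuals, weights, fraction=0.5))
```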
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a different formulation of the viscous terms are shown to be necessary. A hybrid Cartesian/body-fitted grid generation approach is demonstrated. In addition, a grid-generation procedure based on body-aligned cell cutting coupled with a viscous stencil-construction procedure based on quadratic programming is presented.
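As an illustration of the recursive Cartesian subdivision described above, here is a minimal Python sketch of a quadtree-style cell hierarchy refined by a user-supplied flagging predicate; the cut-cell machinery and the flow solver of the paper are omitted, and the class and predicate names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cell:
    x: float          # lower-left corner
    y: float
    size: float
    level: int = 0
    children: Optional[List["Cell"]] = None

    def subdivide(self):
        """Split this cell into four equal children (2D Cartesian refinement)."""
        h = self.size / 2.0
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]

def refine(cell: Cell, needs_refinement, max_level: int = 6) -> None:
    """Recursively subdivide cells flagged by the predicate, mimicking
    solution-adaptive refinement on a tree-based Cartesian grid."""
    if cell.level < max_level and needs_refinement(cell):
        cell.subdivide()
        for child in cell.children:
            refine(child, needs_refinement, max_level)

# Toy usage: refine cells whose lower-left corner lies near the origin.
root = Cell(0.0, 0.0, 1.0)
refine(root, lambda c: c.x < 0.25 and c.y < 0.25)
```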
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-01-01
Objectives To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting Routinely and annually collected national data on China from 2008 to 2012. Participants 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. Conclusions For examining dose–response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. PMID:28399510
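A minimal Python/statsmodels sketch of the repeated-measures idea follows, assuming hypothetical long-format data (the column names lg_opv, dose, year and county are illustrative); it fits only a two-level version, with a random dose slope across counties, rather than the paper's full three-level model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per facility-year, where 'dose' is the
# number of years the policy has been in place at that facility (0 before rollout).
df = pd.read_csv("nemp_long.csv")  # assumed columns: lg_opv, dose, year, county

# Random intercept and random dose slope across counties, so the dose-response
# effect is allowed to vary between units, as in the multilevel RM approach.
model = smf.mixedlm("lg_opv ~ dose + C(year)", df,
                    groups=df["county"], re_formula="~dose")
result = model.fit()
print(result.summary())
```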
Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo
2018-05-10
Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.
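The sketch below (Python/SciPy) illustrates the core modelling idea on simulated data: a cumulative-logit (proportional-odds) likelihood for an ordered response combined with a heavy-tailed Cauchy prior on the coefficients, fitted by quasi-Newton optimisation. It is a flat, non-hierarchical stand-in for the authors' hierarchical model, and the toy data and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_posterior(params, X, y, scale=2.5):
    """Cumulative-logit (proportional-odds) log-likelihood plus a Cauchy(0, scale)
    log-prior on the coefficients; 'params' holds the coefficients followed by an
    unconstrained parameterisation of the ordered cutpoints."""
    n_feat = X.shape[1]
    beta, raw = params[:n_feat], params[n_feat:]
    cuts = np.cumsum(np.concatenate(([raw[0]], np.exp(raw[1:]))))  # increasing cutpoints
    eta = X @ beta
    cum = expit(cuts[None, :] - eta[:, None])                      # P(y <= k)
    cum = np.hstack([cum, np.ones((len(y), 1))])
    lower = np.hstack([np.zeros((len(y), 1)), cum[:, :-1]])
    probs = np.clip(cum - lower, 1e-12, 1.0)                       # P(y == k)
    loglik = np.log(probs[np.arange(len(y)), y]).sum()
    log_prior = -np.sum(np.log1p((beta / scale) ** 2))             # heavy-tailed shrinkage
    return -(loglik + log_prior)

# Toy usage: 4 gene-expression-like predictors, 3 ordered response levels (0, 1, 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.clip((X[:, 0] > 0).astype(int) + (X[:, 1] > 0.5).astype(int), 0, 2)
x0 = np.zeros(X.shape[1] + 2)                                      # 2 cutpoints for 3 levels
fit = minimize(neg_log_posterior, x0, args=(X, y), method="BFGS")
print(fit.x[:4])                                                    # estimated coefficients
```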
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref.1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref.2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref.3, where an initial analytical vacuum solution is taken and gradually atmospheric effects are introduced. Ref.4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve in the direct and indirect shooting methods. The problem is further made complex when different phases of the trajectory have different objectives of optimization and also have different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of non-linear sensitivity inherent in the single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref.5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints, and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization," The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile Using a Multitier Approach," Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs," Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization," Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization," Acta Astronautica, Vol. 15, 1987, pp. 845-850.
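The multiplier-update idea behind augmented-Lagrangian trajectory methods can be sketched on a toy equality-constrained problem, as below in Python/SciPy; this uses the basic first-order multiplier update rather than the Han-Tapia update of the paper, and the objective and constraint are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for one trajectory-phase subproblem: minimise f(x) subject to
# c(x) = 0 via an augmented Lagrangian with a simple multiplier update.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
c = lambda x: x[0] + x[1]                     # single equality constraint

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):
    aug = lambda z: f(z) + lam * c(z) + 0.5 * rho * c(z) ** 2
    x = minimize(aug, x, method="BFGS").x     # inner unconstrained solve
    lam += rho * c(x)                         # first-order multiplier update
print(x, c(x))                                # converges to (1.5, -1.5), with c(x) -> 0
```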
A new, double-inversion mechanism of the F- + CH3Cl SN2 reaction in aqueous solution.
Liu, Peng; Wang, Dunyou; Xu, Yulong
2016-11-23
Atomic-level, bimolecular nucleophilic substitution reaction mechanisms have been studied mostly in the gas phase, but the gas-phase results cannot be expected to reliably describe condensed-phase chemistry. As a novel, double-inversion mechanism has just been found for the F- + CH3Cl SN2 reaction in the gas phase [Nat. Commun., 2015, 6, 5972], here, using multi-level quantum mechanics methods combined with the molecular mechanics method, we discovered a new, double-inversion mechanism for this reaction in aqueous solution. However, the structures of the stationary points along the reaction path show significant differences from those in the gas phase due to the strong influence of solvent and solute interactions, especially due to the hydrogen bonds formed between the solute and the solvent. More importantly, the relationship between the two double-inversion transition states is not clear in the gas phase, but, here we revealed a novel intermediate complex serving as a "connecting link" between the two transition states of the abstraction-induced inversion and the Walden-inversion mechanisms. A detailed reaction path was constructed to show the atomic-level evolution of this novel double reaction mechanism in aqueous solution. The potentials of mean force were calculated and the obtained Walden-inversion barrier height agrees well with the available experimental value.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2014-05-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Implementing Culture Change in Nursing Homes: An Adaptive Leadership Framework
Corazzini, Kirsten; Twersky, Jack; White, Heidi K.; Buhr, Gwendolen T.; McConnell, Eleanor S.; Weiner, Madeline; Colón-Emeric, Cathleen S.
2015-01-01
Purpose of the Study: To describe key adaptive challenges and leadership behaviors to implement culture change for person-directed care. Design and Methods: The study design was a qualitative, observational study of nursing home staff perceptions of the implementation of culture change in each of 3 nursing homes. We conducted 7 focus groups of licensed and unlicensed nursing staff, medical care providers, and administrators. Questions explored perceptions of facilitators and barriers to culture change. Using a template organizing style of analysis with immersion/crystallization, themes of barriers and facilitators were coded for adaptive challenges and leadership. Results: Six key themes emerged, including relationships, standards and expectations, motivation and vision, workload, respect of personhood, and physical environment. Within each theme, participants identified barriers that were adaptive challenges and facilitators that were examples of adaptive leadership. Commonly identified challenges were how to provide person-directed care in the context of extant rules or policies or how to develop staff motivated to provide person-directed care. Implications: Implementing culture change requires the recognition of adaptive challenges for which there are no technical solutions, but which require reframing of norms and expectations, and the development of novel and flexible solutions. Managers and administrators seeking to implement person-directed care will need to consider the role of adaptive leadership to address these adaptive challenges. PMID:24451896
Effects of Teacher-Student Relationships on Peer Harassment: A Multilevel Study
ERIC Educational Resources Information Center
Lucas-Molina, Beatriz; Williamson, Ariel A.; Pulido, Rosa; Pérez-Albéniz, Alicia
2015-01-01
Peer harassment is a major social problem affecting children and adolescents internationally. Much research has focused on student-to-student harassment from either an individual or a multilevel perspective. There is a paucity of multilevel research on students' relationships with the classroom teacher. The purpose of this study was to use a…
Coping with Multi-Level Classes Effectively and Creatively.
ERIC Educational Resources Information Center
Strasheim, Lorraine A.
This paper includes a discussion of the problem of multilevel Latin classes, a description of various techniques and perspectives the teacher might use in dealing with these classes, and copies of materials and exercises that have proved useful in multilevel classes. Because the reasons for the existence of such classes are varied, it is suggested…
Multilevel Evaluation Alignment: An Explication of a Four-Step Model
ERIC Educational Resources Information Center
Yang, Huilan; Shen, Jianping; Cao, Honggao; Warfield, Charles
2004-01-01
Using the evaluation work on the W.K. Kellogg Foundation's Unleashing Resources Initiative as an example, in this article we explicate a general four-step model appropriate for multilevel evaluation alignment. We review the relevant literature, argue for the need for evaluation alignment in a multilevel context, explain the four-step model,…
Alternatives to Multilevel Modeling for the Analysis of Clustered Data
ERIC Educational Resources Information Center
Huang, Francis L.
2016-01-01
Multilevel modeling has grown in use over the years as a way to deal with the nonindependent nature of observations found in clustered data. However, other alternatives to multilevel modeling are available that can account for observations nested within clusters, including the use of Taylor series linearization for variance estimation, the design…
Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI
ERIC Educational Resources Information Center
Forer, Barry; Zumbo, Bruno D.
2011-01-01
The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
ERIC Educational Resources Information Center
Upton, Matthew G.; Egan, Toby Marshall
2007-01-01
The established limitations of career development (CD) theory and human resource development (HRD) theory building are addressed by expanding the framing of these issues to multilevel contexts. Multilevel theory building is an approach most effectively aligned with HRD literature and CD and HRD practice realities. An innovative approach multilevel…
A General Multilevel SEM Framework for Assessing Multilevel Mediation
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zyphur, Michael J.; Zhang, Zhen
2010-01-01
Several methods for testing mediation hypotheses with 2-level nested data have been proposed by researchers using a multilevel modeling (MLM) paradigm. However, these MLM approaches do not accommodate mediation pathways with Level-2 outcomes and may produce conflated estimates of between- and within-level components of indirect effects. Moreover,…
Multilevel Motivation and Engagement: Assessing Construct Validity across Students and Schools
ERIC Educational Resources Information Center
Martin, Andrew J.; Malmberg, Lars-Erik; Liem, Gregory Arief D.
2010-01-01
Statistical biases associated with single-level analyses underscore the importance of partitioning variance/covariance matrices into individual and group levels. From a multilevel perspective based on data from 21,579 students in 58 high schools, the present study assesses the multilevel factor structure of motivation and engagement with a…
The Consequences of Ignoring Individuals' Mobility in Multilevel Growth Models: A Monte Carlo Study
ERIC Educational Resources Information Center
Luo, Wen; Kwok, Oi-man
2012-01-01
In longitudinal multilevel studies, especially in educational settings, it is fairly common that participants change their group memberships over time (e.g., students switch to different schools). Participant's mobility changes the multilevel data structure from a purely hierarchical structure with repeated measures nested within individuals and…
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
The Effects of Autonomy and Empowerment on Employee Turnover: Test of a Multilevel Model in Teams
ERIC Educational Resources Information Center
Liu, Dong; Zhang, Shu; Wang, Lei; Lee, Thomas W.
2011-01-01
Extending research on voluntary turnover in the team setting, this study adopts a multilevel self-determination theoretical approach to examine the unique roles of individual and social-contextual motivational precursors, autonomy orientation and autonomy support, in reducing team member voluntary turnover. Analysis of multilevel time-lagged data…
ERIC Educational Resources Information Center
Gochhayat, Jyotiranjan; Giri, Vijai N.; Suar, Damodar
2017-01-01
This study provides a new conceptualization of educational leadership with a multilevel and integrative approach. It examines the impact of multilevel leadership (MLL) on the effectiveness of technical educational institutes through the mediating effects of organizational communication, bases of power and organizational culture. Data were…
Multilevel corporate environmental responsibility.
Karassin, Orr; Bar-Haim, Aviad
2016-12-01
The multilevel empirical study of the antecedents of corporate social responsibility (CSR) has been identified as "the first knowledge gap" in CSR research. Based on an extensive literature review, the present study outlines a conceptual multilevel model of CSR, then designs and empirically validates an operational multilevel model of the principal driving factors affecting corporate environmental responsibility (CER), as a measure of CSR. Both conceptual and operational models incorporate three levels of analysis: institutional, organizational, and individual. The multilevel nature of the design allows for the assessment of the relative importance of the levels and of their components in the achievement of CER. Unweighted least squares (ULS) regression analysis reveals that the institutional-level variables have medium relationships with CER, some variables having a negative effect. The organizational level is revealed as having strong and positive significant relationships with CER, with organizational culture and managers' attitudes and behaviors as significant driving forces. The study demonstrates the importance of multilevel analysis in improving the understanding of CSR drivers, relative to single level models, even if the significance of specific drivers and levels may vary by context. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with arbitrary grid element types. The proposed numerical method has simplified logic, better performance and better parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness of the method and its convergence to the difference solution.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
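As a concrete illustration of Tikhonov-regularised deconvolution, the Python sketch below solves a convolution equation of the first kind in the Fourier domain with a single zeroth-order regularisation parameter; the paper's two-parameter stabilizing functions and its parameter-choice rules for identification and adaptive correction are not reproduced, and the toy kernel and signal are assumptions.

```python
import numpy as np

def tikhonov_deconvolve(y, kernel, alpha):
    """Zeroth-order Tikhonov solution of k * x = y (circular convolution),
    computed in the Fourier domain: X = conj(K) Y / (|K|^2 + alpha)."""
    n = len(y)
    K = np.fft.fft(kernel, n)
    Y = np.fft.fft(y, n)
    return np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + alpha)))

# Toy usage: recover a rectangular pulse blurred by a Gaussian kernel plus noise.
rng = np.random.default_rng(1)
x_true = np.zeros(128); x_true[40:50] = 1.0
kernel = np.exp(-0.5 * ((np.arange(128) - 5) ** 2) / 4.0); kernel /= kernel.sum()
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(kernel)))
y += 0.01 * rng.normal(size=128)
x_hat = tikhonov_deconvolve(y, kernel, alpha=1e-2)
```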
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-06-24
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1987-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1989-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-08-23
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1989-03-21
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration. 6 figs.
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.
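A conceptual one-dimensional sketch of line-scanning propagation is given below in Python: blur estimates known only at edge pixels are spread to the rest of a scanline by one forward and one backward exponentially-weighted recursive pass, avoiding a global sparse solve. The decay factor and the averaging rule are assumptions, not the authors' exact filter.

```python
import numpy as np

def propagate_blur_line(blur, edge_mask, alpha=0.9):
    """Spread sparse edge-blur estimates along one scanline with a forward and a
    backward recursive pass; each pass decays accumulated values by 'alpha' per
    pixel, and the two passes are combined as a weighted average."""
    def scan(indices):
        vals, wts = np.zeros(len(blur)), np.zeros(len(blur))
        v = w = 0.0
        for i in indices:
            v, w = alpha * v, alpha * w
            if edge_mask[i]:
                v, w = v + blur[i], w + 1.0
            vals[i], wts[i] = v, w
        return vals, wts

    fv, fw = scan(range(len(blur)))               # left-to-right pass
    bv, bw = scan(range(len(blur) - 1, -1, -1))   # right-to-left pass
    return (fv + bv) / np.maximum(fw + bw, 1e-12)

# Toy usage: two edge pixels with different blur estimates on a 10-pixel line.
blur = np.zeros(10); blur[2], blur[7] = 1.0, 3.0
mask = np.zeros(10, dtype=bool); mask[2] = mask[7] = True
print(propagate_blur_line(blur, mask))
```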
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods and leading to a discretisation bias, with the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
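The telescoping estimator at the heart of MLMC can be sketched as below in Python, using a toy stochastic differential equation in place of a PDE solve; fine and coarse simulations at each level share the same random increments so that the level differences have small variance. The level sample sizes and the example dynamics are assumptions, and the SMC extension of the paper is not shown.

```python
import numpy as np

def mlmc_estimate(sample_level, levels, n_samples, seed=0):
    """Telescoping MLMC estimator: E[P_L] ~= E[P_0] + sum_l E[P_l - P_{l-1}],
    where sample_level(l, rng) returns a coupled pair (P_l, P_{l-1})
    and P_{-1} is taken as 0 at the coarsest level."""
    rng = np.random.default_rng(seed)
    return sum(np.mean([np.subtract(*sample_level(l, rng)) for _ in range(n)])
               for l, n in zip(levels, n_samples))

def sample_level(l, rng):
    """Coupled fine/coarse Euler-Maruyama estimates of X_1 for dX = -X dt + dW,
    X_0 = 1; both paths use the same Brownian increments (the coarse increments
    are pairwise sums of the fine ones)."""
    n_fine = 2 ** (l + 1)
    h_f = 1.0 / n_fine
    dw = rng.normal(scale=np.sqrt(h_f), size=n_fine)
    xf = 1.0
    for inc in dw:
        xf += -xf * h_f + inc
    if l == 0:
        return xf, 0.0
    xc, h_c = 1.0, 2.0 * h_f
    for inc in dw.reshape(-1, 2).sum(axis=1):
        xc += -xc * h_c + inc
    return xf, xc

# Exact answer is exp(-1) ~= 0.368; more samples are spent on the cheap levels.
print(mlmc_estimate(sample_level, levels=[0, 1, 2, 3], n_samples=[4000, 1000, 400, 150]))
```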
Synergistic High Charge-Storage Capacity for Multi-level Flexible Organic Flash Memory
NASA Astrophysics Data System (ADS)
Kang, Minji; Khim, Dongyoon; Park, Won-Tae; Kim, Jihong; Kim, Juhwan; Noh, Yong-Young; Baeg, Kang-Jun; Kim, Dong-Yu
2015-07-01
Electret and organic floating-gate memories are next-generation flash storage media for printed organic complementary circuits. While both types of flash memory can be easily fabricated using solution processes on flexible plastic substrates, their promise for on-chip memory organization is limited by unreliable bit operation and high write loads. Here we report a new architecture that improves the overall performance of organic memory and, in particular, provides the high charge-storage capacity required for multi-level operation. Our concept depends on the synergistic effect, established through electrical characterization, of combining a polymer electret (poly(2-vinyl naphthalene) (PVN)) with metal (copper) nanoparticles. It is distinguished from most organic nano-floating-gate memories by using the electret dielectric, instead of a conventional tunneling dielectric, for additional charge storage. The uniform stacking of organic layers, including various dielectrics and poly(3-hexylthiophene) (P3HT) as an organic semiconductor, followed by thin-film coating using orthogonal solvents, greatly improves device precision despite easy and fast manufacture. Poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] as a high-k blocking dielectric also allows a reduction of the programming voltage. The reported synergistic organic memory devices exhibit low power consumption, high cycle endurance, high thermal stability and suitable retention times compared to electret and organic nano-floating-gate memory devices.
Rockett, Ian R H; Jiang, Shuhan; Yang, Qian; Yang, Tingzhong; Yang, Xiaozhao Y; Peng, Sihui; Yu, Lingwei
2017-08-18
This study estimated the prevalence of road traffic injury among Chinese urban residents and examined individual and regional-level correlates. A cross-sectional multistage process was used to sample residents from 21 selected cities in China. Survey respondents reported their history of road traffic injury in the past 12 months through a community survey. Multilevel, multivariable logistic regression analysis was used to identify injury correlates. Based on a retrospective 12-month reporting window, road traffic injury prevalence among urban residents was 13.2%. Prevalence of road traffic injury, by type, was 8.7, 8.7, 8.5, and 7.7% in the automobile, bicycle, motorcycle, and pedestrian categories, respectively. Multilevel analysis showed that prevalence of road traffic injury was positively associated with minority status, income, and mental health disorder score at the individual level. Regionally, road traffic injury was associated with geographic location of residence and prevalence of mental health disorders. Both individual and regional-level variables were associated with road traffic injury among Chinese urban residents, a finding whose implications transcend wholesale imported generic solutions. This descriptive research demonstrates an urgent need for longitudinal studies across China on risk and protective factors, in order to inform injury etiology, surveillance, prevention, treatment, and evaluation.
Synergistic High Charge-Storage Capacity for Multi-level Flexible Organic Flash Memory.
Kang, Minji; Khim, Dongyoon; Park, Won-Tae; Kim, Jihong; Kim, Juhwan; Noh, Yong-Young; Baeg, Kang-Jun; Kim, Dong-Yu
2015-07-23
Electret and organic floating-gate memories are next-generation flash storage media for printed organic complementary circuits. While both types of flash memory can be easily fabricated using solution processes on flexible plastic substrates, their promise for on-chip memory organization is limited by unreliable bit operation and high write loads. Here we report a new architecture that improves the overall performance of organic memory and, in particular, provides the high charge-storage capacity required for multi-level operation. Our concept depends on the synergistic effect, established through electrical characterization, of combining a polymer electret (poly(2-vinyl naphthalene) (PVN)) with metal (copper) nanoparticles. It is distinguished from most organic nano-floating-gate memories by using the electret dielectric, instead of a conventional tunneling dielectric, for additional charge storage. The uniform stacking of organic layers, including various dielectrics and poly(3-hexylthiophene) (P3HT) as an organic semiconductor, followed by thin-film coating using orthogonal solvents, greatly improves device precision despite easy and fast manufacture. Poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] as a high-k blocking dielectric also allows a reduction of the programming voltage. The reported synergistic organic memory devices exhibit low power consumption, high cycle endurance, high thermal stability and suitable retention times compared to electret and organic nano-floating-gate memory devices.
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made by fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., print photo or replay attacks) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, where several applications addressing the antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: CASIA Face Antispoofing database, Replay-Attack database, and MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods in different media and quality metrics.
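A minimal Python sketch of the pipeline follows: the absolute difference of two successive frames is divided into blocks, a uniform LBP histogram is extracted per block (one of the three descriptors mentioned above), and an SVM separates real samples from attacks; the block count, LBP parameters and the random toy frames are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def multiblock_lbp_features(frame_a, frame_b, blocks=3, P=8, R=1.0):
    """Take the absolute difference of two successive frames, split it into a
    blocks x blocks grid, and concatenate per-block uniform-LBP histograms."""
    diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
    lbp = local_binary_pattern(diff, P, R, method="uniform")
    n_bins = P + 2                      # uniform LBP codes range over 0..P+1
    h, w = lbp.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            block = lbp[i * h // blocks:(i + 1) * h // blocks,
                        j * w // blocks:(j + 1) * w // blocks]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Toy usage with random "frames"; in practice X stacks features per frame pair.
rng = np.random.default_rng(0)
X = np.stack([multiblock_lbp_features(rng.random((64, 64)), rng.random((64, 64)))
              for _ in range(20)])
y = np.array([0, 1] * 10)               # 0 = real, 1 = attack (labels illustrative)
clf = SVC(kernel="rbf").fit(X, y)
```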
Topology and grid adaption for high-speed flow computations
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-01-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared together and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
Stange, Jonathan P; MacNamara, Annmarie; Kennedy, Amy E; Hajcak, Greg; Phan, K Luan; Klumpp, Heide
2017-06-23
Single-trial-level analyses afford the ability to link neural indices of elaborative attention (such as the late positive potential [LPP], an event-related potential) with downstream markers of attentional processing (such as reaction time [RT]). This approach can provide useful information about individual differences in information processing, such as the ability to adapt behavior based on attentional demands ("brain-behavioral adaptability"). Anxiety and depression are associated with maladaptive information processing implicating aberrant cognition-emotion interactions, but whether brain-behavioral adaptability predicts response to psychotherapy is not known. We used a novel person-centered, trial-level analysis approach to link neural indices of stimulus processing to behavioral responses and to predict treatment outcome. Thirty-nine patients with anxiety and/or depression received 12 weeks of cognitive behavioral therapy (CBT). Prior to treatment, patients performed a speeded reaction-time task involving briefly-presented pairs of aversive and neutral pictures while electroencephalography was recorded. Multilevel modeling demonstrated that larger LPPs predicted slower responses on subsequent trials, suggesting that increased attention to the task-irrelevant nature of pictures interfered with reaction time on subsequent trials. Whereas using LPP and RT averages did not distinguish CBT responders from nonresponders, in trial-level analyses individuals who demonstrated greater ability to benefit behaviorally (i.e., faster RT) from smaller LPPs on the previous trial (greater brain-behavioral adaptability) were more likely to respond to treatment and showed greater improvements in depressive symptoms. These results highlight the utility of trial-level analyses to elucidate variability in within-subjects, brain-behavioral attentional coupling in the context of emotion processing, in predicting response to CBT for emotional disorders. Copyright © 2017 Elsevier Ltd. All rights reserved.
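A simplified Python/statsmodels sketch of the trial-level idea is given below, assuming hypothetical columns (subject, rt, lpp_prev): a mixed model with a random LPP slope per subject yields subject-specific slopes that can serve as a crude index of brain-behavioral adaptability. The study's actual multilevel specification and predictors are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per trial, with the previous trial's
# LPP amplitude (lpp_prev) and the current reaction time (rt), nested in subjects.
trials = pd.read_csv("trials.csv")   # assumed columns: subject, rt, lpp_prev

# Random intercept and random lpp_prev slope per subject.
m = smf.mixedlm("rt ~ lpp_prev", trials,
                groups=trials["subject"], re_formula="~lpp_prev").fit()

# Each subject's estimated slope: how strongly the prior-trial LPP predicts RT,
# a coupling measure that could then be related to treatment outcome.
slopes = {subj: re["lpp_prev"] for subj, re in m.random_effects.items()}
```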
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.
Thalhammer, Mechthild; Abhau, Jochen
2012-08-15
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0 < ε ≪ 1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.
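For reference, a uniform-mesh, fixed-stepsize Strang time-splitting step for the 1D Gross-Pitaevskii equation is sketched below in Python, with FFTs for the kinetic part; the adaptive spatial meshes and time-stepsize control studied in the paper are not included, and the trap, nonlinearity strength and initial condition are illustrative.

```python
import numpy as np

def strang_step(psi, dt, k, V, g):
    """One Strang splitting step for i psi_t = -0.5 psi_xx + V psi + g |psi|^2 psi:
    half a potential/nonlinear step in physical space, a full kinetic step in
    Fourier space, then another half potential/nonlinear step."""
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    return psi

# Toy usage: harmonic trap, Gaussian initial state, defocusing nonlinearity.
n, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V, g = 0.5 * x ** 2, 1.0
psi = np.exp(-x ** 2) / np.sqrt(np.sum(np.exp(-x ** 2) ** 2) * (L / n))
for _ in range(200):
    psi = strang_step(psi, 1e-3, k, V, g)
print(np.sum(np.abs(psi) ** 2) * (L / n))   # norm stays ~1 (mass conservation)
```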
Solution-Focused Therapy as a Culturally Acknowledging Approach with American Indians
ERIC Educational Resources Information Center
Meyer, Dixie D.; Cottone, R. Rocco
2013-01-01
Limited literature is available applying specific theoretical orientations with American Indians. Solution-focused therapy may be appropriate, given the client-identified solutions, the egalitarian counselor/client relationship, the use of relationships, and the view that change is inevitable. However, adaption of scaling questions and the miracle…
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
Schölmerich, Vera L N; Kawachi, Ichiro
2016-06-01
Scholars and practitioners frequently make recommendations to develop family planning interventions that are "multilevel." Such interventions take explicit account of the role of environments by incorporating multilevel or social-ecological frameworks into their design and implementation. However, research on how interventions have translated these concepts into practice in the field of family planning, and in public health generally, remains scarce. This article seeks to review the current definitions of multilevel interventions and their operationalization in the field of family planning. First, we highlight the divergent definitions of multilevel interventions and show the persistent ambiguity around this term. We argue that interventions involving activities at several levels but lacking targets (i.e., objectives) to create change on more than one level have not incorporated a social-ecological framework and should therefore not be considered as "multilevel." In a second step, we assess the extent to which family planning interventions have successfully incorporated a social-ecological framework. To this end, the 63 studies featured in Mwaikambo et al.'s systematic review on family planning interventions were reexamined. This assessment indicates that the multilevel or social-ecological perspective has seldom been translated into interventions. Specifically, the majority of interventions involved some form of activity at the community and/or organizational level, yet targeted and measured intrapersonal change as opposed to explicitly targeting/measuring environmental modification. © 2016 Society for Public Health Education.
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Factor Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and the effectiveness of the iMCFA procedure in analyzing complex survey data. The analytic results of iMCFA using Muthén's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.
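Because the workflow starts from between- and within-level variance information, a small sketch of the kind of preliminary check involved is given below: estimating each item's intraclass correlation from two-level survey data. The data layout (a cluster identifier column plus item columns) and the one-way ANOVA estimator are assumptions for illustration, not the estimator used inside iMCFA.

```python
# Minimal sketch: item-wise intraclass correlations from a two-level dataset,
# used to judge whether a between-level factor structure is worth modelling.
import pandas as pd

def icc_per_item(df, cluster_col, item_cols):
    """ICC = between-cluster variance / total variance (balanced one-way ANOVA estimator)."""
    out = {}
    for item in item_cols:
        grouped = df.groupby(cluster_col)[item]
        n_bar = grouped.size().mean()                     # average cluster size
        ms_between = grouped.mean().var(ddof=1) * n_bar   # between-cluster mean square (approx.)
        ms_within = grouped.var(ddof=1).mean()            # pooled within-cluster variance
        var_between = max((ms_between - ms_within) / n_bar, 0.0)
        out[item] = var_between / (var_between + ms_within)
    return pd.Series(out)

# Hypothetical usage with students nested in schools:
# iccs = icc_per_item(survey_df, "school_id", ["item1", "item2", "item3"])
# Items with ICC well above ~0.05 suggest that the between level should be modelled.
```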
The Aids' Requirements of Children with Severe Multiple Handicaps and the People Looking after Them.
ERIC Educational Resources Information Center
Anden, Gerd
The report presents findings from interviews with 10 families with children (4-19 years old) with severe mental retardation and multiple disabilities regarding the need for technical aids and adaptations in their homes. The following areas are addressed and examples of solutions proposed: hygienic aids (hot water adaptations, travel adaptations,…
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
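A one-dimensional analogue may help picture the redistribution step: neighbouring grid points are connected by springs whose stiffness grows with an adaptation weight such as the local solution gradient, and the system is relaxed towards equilibrium while user-specified minimum and maximum spacings are enforced. The weight, relaxation factor and clipping below are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a 1D spring-analogy grid redistribution.
import numpy as np

def redistribute_1d(x, weight, ds_min, ds_max, n_iter=200, relax=0.5):
    """Relax interior points toward the equilibrium of springs weighted by 'weight'."""
    x = x.copy()
    for _ in range(n_iter):
        # interval stiffness from the adaptation weight, sampled at the initial
        # nodes and kept fixed here for simplicity
        k = 0.5 * (weight[:-1] + weight[1:]) + 1e-12
        # equilibrium position of each interior node between its two springs
        x_eq = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
        x[1:-1] += relax * (x_eq - x[1:-1])
        # enforce user-specified minimum and maximum spacings, preserving the endpoints
        ds = np.clip(np.diff(x), ds_min, ds_max)
        x[1:] = x[0] + np.cumsum(ds * (x[-1] - x[0]) / ds.sum())
    return x

# Example: points cluster where an illustrative test function varies rapidly
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20.0 * (x - 0.5))
x_adapted = redistribute_1d(x, np.abs(np.gradient(u, x)), ds_min=1e-3, ds_max=0.1)
```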
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
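The iterative component can be pictured with a small dense-matrix sketch: instead of inverting the global error-problem matrix exactly, a few symmetric Gauss-Seidel sweeps produce an error approximation that already carries the directional information needed for the metric tensor. Here A and r stand for an assembled hierarchical-basis error matrix and residual, which are assumptions of this illustration rather than the paper's data structures.

```python
# Minimal sketch: approximate solution of the global error problem A e = r
# with a few symmetric Gauss-Seidel (forward + backward) sweeps.
import numpy as np

def symmetric_gauss_seidel(A, r, n_sweeps=3):
    """Return an approximation to A^{-1} r after n_sweeps symmetric GS sweeps."""
    n = A.shape[0]
    e = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):                      # forward sweep
            e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i + 1:] @ e[i + 1:]) / A[i, i]
        for i in range(n - 1, -1, -1):          # backward sweep
            e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i + 1:] @ e[i + 1:]) / A[i, i]
    return e

# The entries of e associated with the hierarchical (edge) basis functions then give
# the directional error information from which an anisotropic metric could be built.
```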
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
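A compact sketch of the two ingredients of the adaptation procedure follows; the array names and the finite-difference curvature estimate are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: refinement parameter from a coarse/fine comparison, and a
# threshold placed where the curve "cells flagged vs. threshold" bends most sharply.
import numpy as np

def refinement_parameter(u_fine, u_coarse_on_fine):
    """Mesh-convergence indicator: local discrepancy between fine and coarsened solutions."""
    return np.abs(u_fine - u_coarse_on_fine)

def choose_threshold(tau, n_candidates=100):
    """Pick the threshold at the point of maximum curvature of N_flagged(threshold)."""
    thresholds = np.linspace(tau.min(), tau.max(), n_candidates)
    n_flagged = np.array([(tau > t).sum() for t in thresholds], dtype=float)
    curvature = np.abs(np.gradient(np.gradient(n_flagged, thresholds), thresholds))
    return thresholds[np.argmax(curvature)]

def flag_cells(u_fine, u_coarse_on_fine):
    """Boolean mask of cells selected for refinement."""
    tau = refinement_parameter(u_fine, u_coarse_on_fine)
    return tau > choose_threshold(tau)
```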
A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations
NASA Astrophysics Data System (ADS)
Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.
2012-05-01
We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
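The self-organisation idea can be illustrated in one dimension: particles move under pseudo forces until their spacing matches a prescribed resolution function, so that neither holes nor clusters develop. The force law and resolution function below are illustrative assumptions, not those of the paper.

```python
# Minimal 1D sketch of particle self-organisation toward a target resolution h(x).
import numpy as np

def resolution(x):
    return 0.02 + 0.08 * np.abs(x - 0.5)          # illustrative: finer resolution near x = 0.5

def self_organize(x, n_steps=800, dt=0.1):
    x = np.sort(x.copy())
    x[0], x[-1] = 0.0, 1.0                        # pin the domain endpoints
    for _ in range(n_steps):
        h = resolution(x)
        gap = np.diff(x)
        target = 0.5 * (h[:-1] + h[1:])
        # repulsive when neighbours are closer than the target spacing, attractive when farther
        f_pair = (target - gap) / target
        force = np.zeros_like(x)
        force[:-1] -= f_pair
        force[1:] += f_pair
        x[1:-1] += dt * force[1:-1] * h[1:-1]     # displacements scaled by the local size
        x = np.clip(x, 0.0, 1.0)
    return np.sort(x)

particles = self_organize(np.random.rand(80))     # spacing ends up roughly following h(x)
```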
Using Visual Analysis to Evaluate and Refine Multilevel Models of Single-Case Studies
ERIC Educational Resources Information Center
Baek, Eun Kyeng; Petit-Bois, Merlande; Van den Noortgate, Wim; Beretvas, S. Natasha; Ferron, John M.
2016-01-01
In special education, multilevel models of single-case research have been used as a method of estimating treatment effects over time and across individuals. Although multilevel models can accurately summarize the effect, it is known that if the model is misspecified, inferences about the effects can be biased. Concern with the potential for model…
ERIC Educational Resources Information Center
Schölmerich, Vera L. N.; Kawachi, Ichiro
2016-01-01
Multilevel interventions are inspired by socio-ecological models, and seek to create change on various levels--for example by increasing the health literacy of individuals as well as modifying the social norms within a community. Despite becoming a buzzword in public health, actual multilevel interventions remain scarce. In this commentary, we…
Multilevel Modeling and School Psychology: A Review and Practical Example
ERIC Educational Resources Information Center
Graves, Scott L., Jr.; Frohwerk, April
2009-01-01
The purpose of this article is to provide an overview of the state of multilevel modeling in the field of school psychology. The authors provide a systematic assessment of published research of multilevel modeling studies in 5 journals devoted to the research and practice of school psychology. In addition, a practical example from the nationally…
Teaching ESL in a Multilevel Classroom. Adult Education Series #13. Refugee Education Guide.
ERIC Educational Resources Information Center
Center for Applied Linguistics, Washington, DC. Language and Orientation Resource Center.
Adult refugee English as a second language (ESL) programs are often mandated to serve all who sign up for instruction, a requirement that results in multilevel classes. This guide describes and discusses this and other factors which contribute to the existence of multilevel and/or heterogeneous classes, and provides some practical approaches and…
Perceived sources of stress amongst Chilean and Argentinean dental students.
Fonseca, J; Divaris, K; Villalba, S; Pizarro, S; Fernandez, M; Codjambassis, A; Villa-Torres, L; Polychronopoulou, A
2013-02-01
The prevalence of high levels of stress, as well as its multilevel consequences, is well documented amongst students in the health sciences, and particularly in dentistry. However, investigations of perceived stress amongst Spanish-speaking student groups are sparse. This study aimed to (i) describe the translation, adaptation and psychometric properties of a Spanish version of the Dental Environment Stressors questionnaire and (ii) examine the perceived sources of stress and their associations with the students' study year and gender in two dental schools in Latin America. All students officially registered in the dental schools of the University of San Sebastian (USS) in Chile and the Catholic University of Cordoba (CUC) in Argentina were invited to participate in the study. The DES30 questionnaire was adapted into Spanish using translation/back-translation, an expert bilingual committee, and consensus building. Cronbach's alpha was used to measure the instrument's internal consistency, and iterated principal factor analysis with promax rotation was employed to explore its underlying factor structure. Descriptive, bivariate and multivariate methods were used to examine the patterns of association between individual stressors, factor scores and students' characteristics. Three hundred and four students comprised the study's analytical sample, with two-thirds of those being female. The DES30-Sp demonstrated good internal consistency (Cronbach's α = 0.89). A four-factor solution emerged and included 'academic workload', 'clinical training', 'time constraints' and 'self-efficacy beliefs' factors. 'Fear of failing a course or a year', 'examinations and grades' and 'lack of time for relaxation' were amongst the top individual-item stressors reported by students in both schools. Amongst this group of undergraduate dental students, those in Argentina, those in higher study years, and females reported higher perceived stress. Increased workload, time constraints and some aspects of clinical training were the top stressors of approximately 300 Chilean and Argentinean dental undergraduates. Some variations between schools, between males and females, and across study years were noted. The Spanish version of the DES30 questionnaire performed well, but future studies should evaluate the instrument's properties in larger and more diverse dental student populations. © 2012 John Wiley & Sons A/S.
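As a small illustration of the reliability statistic reported above, the sketch below computes Cronbach's alpha for a respondents-by-items response matrix; the data layout is an assumption, and the study's full analysis additionally involved iterated principal factor extraction with promax rotation.

```python
# Minimal sketch: Cronbach's alpha for a questionnaire stored as a 2D array.
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_variances / total_score_variance)

# Hypothetical usage: alpha = cronbach_alpha(responses)   # ~0.89 was reported for the DES30-Sp
```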
A graph based algorithm for adaptable dynamic airspace configuration for NextGen
NASA Astrophysics Data System (ADS)
Savai, Mehernaz P.
The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such a static configuration of sectors will not be able to handle the continued growth of air travel, which is projected to more than double current traffic by 2025. Under the Next Generation Air Transportation System (NextGen) initiative, the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that the sectors should adapt to the changing traffic so as to reduce the controller workload variance over time while increasing the throughput. Changes in the resectorization should be such that there is a minimal increase in the exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost in order to justify a change. This leads to the analysis of the concept of transition workload, which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric which considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented for the Dynamic Airspace Configuration (DAC) algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of the airspace with given user-defined constraints and hence provides the user more flexibility and control over the various partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major (busy) airports need to be given more importance and are hence treated separately. Thus the algorithm takes into account the air route structure while finding a balance between sector workloads. The performance of the proposed algorithms and metrics is validated with the Enhanced Traffic Management System (ETMS) air traffic data.
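To picture the multilevel partitioning idea in miniature (a generic sketch, not the dissertation's constrained algorithm), the code below coarsens a weighted graph by heavy-edge matching, bisects the coarsest graph by vertex weight, projects the partition back and applies one greedy refinement pass; the dictionary-based graph format and the neglect of balance and user-defined constraints are simplifying assumptions.

```python
# Minimal sketch of multilevel graph bisection: match, coarsen, partition, project, refine.

def heavy_edge_matching(adj):
    """Pair each unmatched node with its heaviest unmatched neighbour."""
    matched, pairs = set(), []
    for u in adj:
        if u in matched:
            continue
        cands = [(w, v) for v, w in adj[u].items() if v not in matched]
        if cands:
            _, v = max(cands, key=lambda t: t[0])
            pairs.append((u, v))
            matched.update((u, v))
        else:
            pairs.append((u,))
            matched.add(u)
    return pairs

def coarsen(adj, node_w, pairs):
    """Merge matched pairs into coarse nodes, summing vertex and edge weights."""
    label = {v: i for i, grp in enumerate(pairs) for v in grp}
    c_adj = {i: {} for i in range(len(pairs))}
    c_w = [sum(node_w[v] for v in grp) for grp in pairs]
    for u in adj:
        for v, w in adj[u].items():
            a, b = label[u], label[v]
            if a != b:
                c_adj[a][b] = c_adj[a].get(b, 0) + w
    return c_adj, c_w, label

def bisect(adj, node_w, min_size=8):
    """Recursive multilevel bisection (balance constraints ignored in this sketch)."""
    nodes = list(adj)
    pairs = heavy_edge_matching(adj)
    if len(nodes) <= min_size or len(pairs) == len(nodes):
        # coarsest level (or nothing left to merge): greedy split by vertex weight
        part, half, acc = {}, sum(node_w[n] for n in nodes) / 2.0, 0.0
        for n in sorted(nodes, key=lambda n: -node_w[n]):
            part[n] = 0 if acc < half else 1
            if part[n] == 0:
                acc += node_w[n]
        return part
    c_adj, c_w, label = coarsen(adj, node_w, pairs)
    c_part = bisect(c_adj, c_w, min_size)
    part = {u: c_part[label[u]] for u in adj}         # project the coarse partition back
    for u in adj:                                     # one greedy edge-cut refinement pass
        gain = sum(w if part[v] != part[u] else -w for v, w in adj[u].items())
        if gain > 0:
            part[u] = 1 - part[u]
    return part

# Toy route graph (node weights ~ workload, edge weights ~ traffic exchanged)
adj = {"A": {"B": 3, "C": 1}, "B": {"A": 3, "D": 2}, "C": {"A": 1, "D": 2}, "D": {"B": 2, "C": 2}}
node_w = {"A": 5, "B": 4, "C": 4, "D": 5}
print(bisect(adj, node_w, min_size=2))
```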
Statistical efficiency of adaptive algorithms.
Widrow, Bernard; Kamenetsky, Max
2003-01-01
The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
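A compact sketch of the two update rules being compared may help fix ideas; it assumes the inverse input autocorrelation matrix R_inv is known for LMS/Newton, which, as noted above, is exactly what makes that algorithm impractical outside benchmark settings.

```python
# Minimal sketch: LMS versus LMS/Newton weight updates for adaptive FIR filtering.
import numpy as np

def lms(x, d, n_weights, mu):
    """Adapt FIR weights so that w @ taps tracks the desired response d."""
    w = np.zeros(n_weights)
    for n in range(n_weights - 1, len(x)):
        u = x[n - n_weights + 1:n + 1][::-1]     # tap vector [x[n], x[n-1], ...]
        e = d[n] - w @ u                         # a-priori estimation error
        w += 2.0 * mu * e * u                    # LMS: step along the instantaneous gradient
    return w

def lms_newton(x, d, n_weights, mu, R_inv):
    """Same update, but the gradient is whitened by the inverse input covariance."""
    w = np.zeros(n_weights)
    for n in range(n_weights - 1, len(x)):
        u = x[n - n_weights + 1:n + 1][::-1]
        e = d[n] - w @ u
        w += 2.0 * mu * e * (R_inv @ u)
    return w

# Illustrative system-identification example (white input, so R is the identity and
# LMS/Newton reduces to LMS); w_lms should approach [0.5, -0.3, 0.1]
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, -0.3, 0.1])[:len(x)] + 0.01 * rng.standard_normal(5000)
w_lms = lms(x, d, n_weights=3, mu=0.005)
```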
Intermediate and advanced topics in multilevel logistic regression analysis
Merlo, Juan
2017-01-01
Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28543517
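A short sketch of the summary measures described above follows, computed from the cluster-level (random-intercept) variance of an already fitted multilevel logistic model; the variable names and the 80% interval choice are illustrative assumptions.

```python
# Minimal sketch: VPC, median odds ratio and an 80% interval odds ratio from
# the random-intercept variance sigma2_u and a cluster-level coefficient beta.
import math
from scipy.stats import norm

def variance_partition_coefficient(sigma2_u):
    """Share of latent-scale outcome variation attributable to clusters."""
    return sigma2_u / (sigma2_u + math.pi**2 / 3.0)

def median_odds_ratio(sigma2_u):
    """Median OR between identical subjects in two randomly chosen different clusters."""
    return math.exp(math.sqrt(2.0 * sigma2_u) * norm.ppf(0.75))

def interval_odds_ratio_80(beta, sigma2_u):
    """80% interval odds ratio for a cluster-level covariate."""
    spread = math.sqrt(2.0 * sigma2_u)
    return (math.exp(beta + spread * norm.ppf(0.10)),
            math.exp(beta + spread * norm.ppf(0.90)))

# Example: with sigma2_u = 0.25, the VPC is about 0.07 and the MOR is about 1.61.
```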