Sample records for explicit time step

  1. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
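The trade-off examined in this paper (fixed-step explicit Euler versus an adaptive explicit Heun scheme with embedded error control) can be sketched on a toy linear-reservoir ODE, dS/dt = P - kS. The model, tolerances, and step-control constants below are illustrative assumptions, not the paper's six models or its experimental setup:

```python
import math

def reservoir(t, s, k=2.0, p=1.0):
    """Hypothetical toy linear reservoir: dS/dt = P - k*S."""
    return p - k * s

def euler_fixed(f, s0, t_end, dt):
    """Fixed-step explicit Euler."""
    t, s = 0.0, s0
    while t < t_end - 1e-12:
        s += dt * f(t, s)
        t += dt
    return s

def heun_adaptive(f, s0, t_end, tol=1e-6, dt=0.1):
    """Adaptive explicit Heun: the embedded Euler predictor gives a free
    local-error estimate used to accept/reject and resize each step."""
    t, s = 0.0, s0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, s)
        k2 = f(t + dt, s + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)      # |Heun - Euler| estimate
        if err <= tol:                     # accept the Heun step
            s += 0.5 * dt * (k1 + k2)
            t += dt
        # resize: grow at most 2x, shrink at most 5x (standard safety factors)
        dt *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-15))))
    return s
```

For this linear test problem the exact solution is S(t) = P/k + (S0 - P/k)·exp(-kt), so both integrators can be checked directly.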

  2. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  3. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
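A scalar sketch of the solution strategy this abstract describes: a Crank-Nicolson step whose implicit relation is driven to steady state by marching in pseudo-time. The test equation du/dt = -lam*u and all parameters are illustrative assumptions, not the paper's flow problem:

```python
def crank_nicolson_pseudo_time(u, lam, dt, iters=50):
    """One Crank-Nicolson step for du/dt = -lam*u. The implicit relation
    (1 + lam*dt/2) v = (1 - lam*dt/2) u is solved by marching
    dv/dtau = rhs - a*v to steady state in pseudo-time, a scalar stand-in
    for the paper's quasi-Newton/Gauss-Seidel iteration."""
    a = 1.0 + 0.5 * lam * dt
    rhs = (1.0 - 0.5 * lam * dt) * u
    dtau = 1.0 / a                 # inside the pseudo-time stability limit 2/a
    v = u                          # initial guess: previous time level
    for _ in range(iters):
        v += dtau * (rhs - a * v)  # added pseudo-time derivative
    return v

lam, dt = 100.0, 0.1               # lam*dt = 10: 5x the explicit limit 2/lam
u = 1.0
for _ in range(10):
    u = crank_nicolson_pseudo_time(u, lam, dt)
# u stays bounded: the A-stable scheme tolerates the large time step
```

For this linear case the converged step is exactly u_new = u·(1 - lam·dt/2)/(1 + lam·dt/2), which makes the iteration easy to verify.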

  4. A three dimensional multigrid multiblock multistage time stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.

    1991-01-01

    A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.

  5. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as in the case of a bridge crane under seismic loading, multiple time scales coexist in the same problem. In that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce the much lower frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also provides excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  6. Solution of the Average-Passage Equations for the Incompressible Flow through Multiple-Blade-Row Turbomachinery

    DTIC Science & Technology

    1994-02-01

    An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate...

  7. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  8. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
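A minimal, first-order stand-in for the IMEX idea described in this record: the stiff linear term is advanced implicitly (here a trivial scalar solve) while the non-stiff term stays explicit, so the step size is not limited by the stiff rate. This one-stage IMEX Euler sketch is far simpler than the multi-stage ARK schemes the paper evaluates, and the toy problem is an assumption for illustration:

```python
def imex_euler(u0, dt, nsteps, f_explicit, lam_implicit):
    """First-order IMEX (additive) Euler for du/dt = f_explicit(u) - lam*u:
    the non-stiff term is advanced explicitly, the stiff linear term
    implicitly, so dt is not constrained by lam."""
    u = u0
    for _ in range(nsteps):
        # implicit solve is a scalar division because the stiff part is linear
        u = (u + dt * f_explicit(u)) / (1.0 + dt * lam_implicit)
    return u

# hypothetical toy: mild logistic growth plus a stiff sink (lam*dt = 50,
# 25x the explicit stability limit 2/lam)
u_final = imex_euler(1.0, dt=0.05, nsteps=200,
                     f_explicit=lambda u: 0.1 * u * (1.0 - u),
                     lam_implicit=1000.0)
```

Because the stiff sink dominates, the solution decays rapidly toward zero without any stability-driven step-size restriction.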

  9. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  10. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
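The construction in this abstract can be illustrated with first-order upwind advection: splitting c into its integer part N and fractional part Δc gives an exact N-cell shift followed by the ordinary Δc < 1 upwind update. This periodic-grid sketch is a minimal illustration under those assumptions, not the paper's code:

```python
def upwind_any_courant(q, c):
    """First-order upwind advection on a periodic grid for any Courant
    number c >= 0: write c = N + dc with N = int(c), shift the field by N
    whole cells (exact), then apply the usual dc < 1 upwind update."""
    n = len(q)
    N, dc = int(c), c - int(c)
    qs = [q[(i - N) % n] for i in range(n)]               # exact integer shift
    return [qs[i] - dc * (qs[i] - qs[(i - 1) % n]) for i in range(n)]

pulse = [0.0] * 16
pulse[3] = 1.0
moved = upwind_any_courant(pulse, 5.0)   # c = 5: pure shift, no smearing
```

With an integer Courant number the update reduces to a lossless shift; with a fractional part the scheme smears the pulse exactly as the ordinary c < 1 upwind method would, while remaining conservative.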

  11. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
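A minimal sketch of the point implicit idea: only the local (diagonal) source term is treated implicitly, so each cell needs a scalar division rather than an iteration or a linear solve, yet the stiff source imposes no time step limit. The two-cell toy problem below is a hypothetical illustration, not one of the paper's test problems:

```python
def point_implicit_step(u, dt, flux_div, lam):
    """One point-implicit step for du_i/dt = flux_div_i(u) - lam_i*u_i:
    fluxes (which couple cells) stay explicit, while the local linear
    source is implicit. The implicit part is diagonal, so the update is
    a per-cell division with no iteration."""
    return [(ui + dt * fi) / (1.0 + dt * li)
            for ui, fi, li in zip(u, flux_div(u), lam)]

def flux_div(u):
    """Conservative explicit exchange between two cells."""
    return [u[1] - u[0], u[0] - u[1]]

u = [1.0, 0.0]
lam = [0.0, 100.0]                  # stiff sink only in cell 2
for _ in range(100):
    u = point_implicit_step(u, 0.5, flux_div, lam)   # dt*lam = 50
```

The explicit exchange obeys its own (mild) stability limit, but the stiff sink with dt·lam = 50 causes no instability because it is absorbed into the denominator.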

  12. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.

  13. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain, a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.

  14. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response. Therein, velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. 
International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
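For a single linear nodal restraint, the force projection this abstract relates to RATTLE reduces to removing the force component along the constraint direction. This sketch assumes that simplest case (one constraint, frictionless, hypothetical data) and is not the article's general algorithm:

```python
def project_restraint_force(f, n):
    """Project a nodal force f onto the subspace admissible under a single
    linear restraint n.u = 0 by subtracting its component along n."""
    nf = sum(ni * fi for ni, fi in zip(n, f))   # n . f
    nn = sum(ni * ni for ni in n)               # |n|^2
    return [fi - (nf / nn) * ni for fi, ni in zip(f, n)]

# a node restrained in the y-direction keeps only its x and z force
free_force = project_restraint_force([1.0, 2.0, 3.0], [0.0, 1.0, 0.0])
```

The projected force is orthogonal to the restraint direction, so the constrained degree of freedom receives no acceleration.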

  15. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferential directions, as in Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.

  16. Incompressible spectral-element method: Derivation of equations

    NASA Technical Reports Server (NTRS)

    Deanna, Russell G.

    1993-01-01

    A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.

  17. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. 
    We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
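The RKL1 recursion can be sketched for 1D periodic diffusion. The stage coefficients follow the published first-order recursion (μ_j = (2j-1)/j, ν_j = (1-j)/j, with μ̃_j = μ_j · 2/(s² + s)); the grid size and step sizes below are illustrative assumptions:

```python
def rkl1_superstep(u, s, dt, kappa):
    """One s-stage RKL1 superstep for du/dt = L(u), with L a 1D periodic
    diffusion stencil (kappa = D/dx**2). The superstep dt may be up to
    (s*s + s)/2 times the explicit diffusion limit 0.5/kappa."""
    n = len(u)
    def L(w):
        return [kappa * (w[(j - 1) % n] - 2.0 * w[j] + w[(j + 1) % n])
                for j in range(n)]
    w1 = 2.0 / (s * s + s)
    y_prev = u[:]
    y = [a + w1 * dt * b for a, b in zip(u, L(u))]        # stage 1
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j
        nu = (1.0 - j) / j
        mut = mu * w1
        y_prev, y = y, [mu * a + nu * b + mut * dt * c
                        for a, b, c in zip(y, y_prev, L(y))]
    return y

spike = [0.0] * 16
spike[8] = 1.0
out = spike
for _ in range(4):
    out = rkl1_superstep(out, s=5, dt=2.0, kappa=1.0)  # 4x the explicit limit
```

Because each stage is a convex-style combination (μ_j + ν_j = 1) plus a conservative stencil, the superstep conserves the total and diffuses the spike without blowing up, even though dt exceeds the single-step explicit limit severalfold.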

  18. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. 
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  19. An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2013-09-01

    In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy (CFL) stability condition permits. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is νsts ≈ 0.01 and Nsts ≈ 5.
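For reference, the inner steps of first-order STS are commonly built from Chebyshev polynomials (Alexiades, Amiez & Gremaud 1996); a sketch of the substep formula, evaluated at the abstract's recommended parameters ν ≈ 0.01 and N ≈ 5, might look like this (parameter names are illustrative):

```python
import numpy as np

def sts_substeps(dt_expl, nu, n):
    """Chebyshev substeps of first-order super-time-stepping:
    n inner steps whose sum approaches n**2 * dt_expl as the
    damping parameter nu -> 0 (nu > 0 trades speedup for stability)."""
    j = np.arange(1, n + 1)
    return dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * n)) + 1.0 + nu)
```

With ν = 0.01 and N = 5 the five substeps sum to roughly 19 explicit steps' worth of time, i.e. most of the ideal N² = 25 speedup.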

  20. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    NASA Astrophysics Data System (ADS)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, which must scale with the square of the grid resolution (Δx²) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated on more than 10⁴ processors.
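The quadratic step-size penalty the abstract refers to is easy to see: on a refined grid the explicit diffusive limit shrinks like Δx² while the advective CFL limit shrinks only like Δx, so the ratio between them halves with every refinement. A minimal illustration (unit diffusivity and signal speed are assumed):

```python
def step_limits(n, kappa=1.0, c=1.0):
    """Stable explicit step sizes on an n-cell unit grid:
    the hyperbolic (CFL) limit scales like dx, the explicit
    parabolic (diffusion) limit like dx**2."""
    dx = 1.0 / n
    dt_hyp = dx / c
    dt_par = 0.5 * dx**2 / kappa
    return dt_hyp, dt_par
```

For example, doubling the resolution from 64 to 128 cells halves dt_par/dt_hyp, which is why STS/RKL acceleration of the parabolic terms pays off at high resolution.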

  1. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS is used, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp. 225-251). Two factors make the development of efficient concurrent explicit time integration programs a challenge in structural dynamics: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
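A sketch of the idea behind mixed delta t integration: compute each element's critical time step and assign power-of-two subcycling ratios so that only the stiff elements are integrated with the small step. The grouping below is a hypothetical illustration, not WHAMS's actual algorithm.

```python
import math

def element_time_steps(lengths, wave_speed, dt_max):
    """Per-element stable steps for explicit central differencing,
    dt_e ~ L_e / c, quantized to dt_max / 2**k so that elements can
    be subcycled in integer ratios within one global step."""
    steps = []
    for length in lengths:
        dt_e = length / wave_speed            # element critical step
        # smallest power-of-two subdivision with dt_max / 2**k <= dt_e
        k = max(0, math.ceil(math.log2(dt_max / dt_e)))
        steps.append(dt_max / 2 ** k)
    return steps
```

A mesh with one element ten times smaller than the rest then subcycles only that element 16 times per global step instead of shrinking every element's step.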

  2. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
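One common way to constrain neighboring time steps, sketched below for a 1-D row of patches, is to quantize each patch's step to dt_min · 2^level and limit adjacent patches to neighboring levels. This 2:1 rule is an illustration of the general idea, not the paper's exact CFL-based constraint.

```python
import numpy as np

def enforce_levels(dt_local, dt_min):
    """Clamp patch time steps so neighboring patches differ by at
    most one power-of-two level (a common 2:1 rule; the paper's
    actual CFL constraint on neighbors is stricter than this sketch)."""
    levels = np.floor(np.log2(dt_local / dt_min)).astype(int)
    changed = True
    while changed:                       # sweep until no level violates the rule
        changed = False
        for i in range(len(levels)):
            for j in (i - 1, i + 1):     # 1-D neighbor patches
                if 0 <= j < len(levels) and levels[i] > levels[j] + 1:
                    levels[i] = levels[j] + 1
                    changed = True
    return dt_min * 2.0 ** levels
```

A patch wanting an 8x step between two patches at the minimum step gets clamped to 2x, so information from its neighbors cannot outrun its update.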

  3. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning, IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
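The IMEX idea in its simplest form: advance the non-stiff terms explicitly and the stiff terms implicitly within the same step. A minimal first-order sketch for a scalar model problem with a stiff linear term (the ARK methods in the abstract are higher-order, multi-stage generalizations of this forward-backward Euler split):

```python
def imex_euler(u, dt, f_explicit, lam):
    """One first-order IMEX step for du/dt = f_explicit(u) + lam*u.

    The slow term f_explicit is evaluated at the old state; the stiff
    linear term lam*u is treated implicitly (backward Euler), so the
    step remains stable for dt far beyond the explicit limit ~2/|lam|.
    """
    return (u + dt * f_explicit(u)) / (1.0 - dt * lam)
```

With lam = -1000 and dt = 0.1 (50x the explicit stability limit), the iteration still converges smoothly to the correct steady state.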

  4. An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.

  5. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  6. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  7. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  8. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled time-asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.

  9. Enforcing the Courant–Friedrichs–Lewy condition in explicitly conservative local time stepping schemes

    DOE PAGES

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-01-30

    In this study, an optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.

  10. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    USGS Publications Warehouse

    Casulli, V.; Cheng, R.T.

    1990-01-01

    In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
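The single tridiagonal system solved once per time step by the semi-implicit scheme is typically handled with the Thomas algorithm. A standard O(n) sketch, assuming a diagonally dominant system so no pivoting is needed:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n).

    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]                      # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]                           # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This linear cost per step is what makes the semi-implicit channel-segment solves cheap compared with a general dense or sparse factorization.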

  11. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.

  12. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C⁰ continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code, assessment of performance, and demonstration of flexibility.

  13. A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)

    EPA Science Inventory

    Abstract

    Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...

  14. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  15. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p².) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.

  16. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
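The time-splitting strategy described above, several CFL-limited advection substeps followed by one implicit dispersion step spanning the same interval, can be sketched as follows for a 1-D periodic grid. This is an illustrative finite-difference analogue, not TaRSE's actual Godunov/mixed-finite-element implementation.

```python
import numpy as np

def split_step(u, dx, v, D, dt_adv, n_adv):
    """Operator-splitting sketch: n_adv explicit upwind advection
    substeps (each CFL-limited), then one implicit backward Euler
    dispersion solve covering the same total interval."""
    for _ in range(n_adv):                      # explicit advection part
        u = u - v * dt_adv / dx * (u - np.roll(u, 1))
    dt = n_adv * dt_adv                         # dispersion spans all substeps
    n = len(u)
    alpha = D * dt / dx**2
    A = np.eye(n) * (1.0 + 2.0 * alpha)         # (I + dt*D*L) for periodic grid
    for i in range(n):
        A[i, (i - 1) % n] -= alpha
        A[i, (i + 1) % n] -= alpha
    return np.linalg.solve(A, u)
```

Because the expensive dispersive solve runs once per n_adv advective substeps, the split scheme does less implicit work than a non-split method over the same interval; both halves here conserve mass and obey a discrete maximum principle.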

  17. A comparison of the performance of 1st order and 2nd order turbulence models when solving the RANS equations in reproducing the liquid film length unsteady response to momentum flux ratio in Gas-Centered Swirl-Coaxial Injectors in Rocket Engine Applications

    DTIC Science & Technology

    2012-06-07

    scheme for the VOF requires the use of the explicit solver to advance the solution in time. The drawback of using the explicit solver is that such an approach required much smaller time steps to guarantee that a converged and stable solution is obtained during each fractional time step. Comparable results were obtained for the solutions with the RSM model.

  18. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.

  19. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute very large-scale modeling and simulation problems, whose size continues to grow with the advancement of processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time.
The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
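The unstable-mode deduction described in the second contribution can be illustrated on a model problem: for a leapfrog-style second-order update of u'' = -Mu with symmetric positive semi-definite M, modes with eigenvalue above 4/Δt² violate the stability bound λΔt² ≤ 4 and can be deducted from the operator. A sketch under that assumption (not the thesis' actual TDFEM matrices):

```python
import numpy as np

def stabilize(M, dt):
    """Deduct unstable modes from a symmetric PSD operator M so that
    a central-difference (leapfrog) update with step dt is stable.

    Modes with eigenvalue > 4/dt**2 are the root cause of instability
    for this update; subtracting their rank-1 contributions zeroes them
    out while leaving all stable modes untouched."""
    w, V = np.linalg.eigh(M)
    bad = w > 4.0 / dt**2
    return M - (V[:, bad] * w[bad]) @ V[:, bad].T
```

After deduction, the spectral radius of the operator satisfies the leapfrog bound for the chosen Δt, so the explicit update can run with an arbitrarily large step at the cost of discarding the high-frequency content carried by the removed modes.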

  20. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.

  1. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been extensively used to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for simulating electromagnetic problems that have very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses hybrid implicit-explicit differencing in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems which have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described, and then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.

  2. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step size on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable for modeling more realistically the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.

  3. Higher-order hybrid implicit/explicit FDTD time-stepping

    NASA Astrophysics Data System (ADS)

    Tierens, W.

    2016-12-01

    Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.

  4. Explicit finite difference predictor and convex corrector with applications to hyperbolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, C.; Dey, S. K.

    1983-01-01

An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function applied at each time level and at each mesh point. It contains a parameter that can be estimated so that, even for larger time steps, the algorithm remains stable and converges quickly to the steady-state solution. Some examples are given.
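    A minimal sketch of a predictor step followed by a convex corrector, for the model equation u_t + a u_x = 0 with periodic boundaries. The specific corrector below (a convex blend, with weight theta, of the predicted value and the average of its neighbors) is an illustrative guess at such a construction, not the authors' actual scheme.

```python
import numpy as np

def predictor_convex_corrector(u, a, dx, dt, theta):
    """One explicit step for u_t + a*u_x = 0 (a > 0, periodic grid):
    upwind predictor, then a convex corrector that blends each predicted
    value with its neighbor average. theta = 1 recovers the pure predictor;
    smaller theta adds stabilizing smoothing."""
    u_pred = u - a * dt / dx * (u - np.roll(u, 1))       # upwind predictor
    neighbor_avg = 0.5 * (np.roll(u_pred, 1) + np.roll(u_pred, -1))
    return theta * u_pred + (1.0 - theta) * neighbor_avg  # convex corrector

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(50):
    u = predictor_convex_corrector(u, a=1.0, dx=1.0 / n, dt=0.005, theta=0.9)
# Every stage is a convex combination of grid values (CFL = 0.5 here),
# so the max norm of the solution cannot grow.
```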

  5. Representation of Nucleation Mode Microphysics in a Global Aerosol Model with Sectional Microphysics

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Pierce, J. R.; Adams, P. J.

    2013-01-01

    In models, nucleation mode (1 nm

  6. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  7. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  8. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency, and because the time step limitations associated with causal CFL-like criteria, typical of explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals are typically two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
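    The time-spectral idea can be sketched on a scalar linear ODE: represent u(t) as one Chebyshev series over the whole interval and solve a single linear system for its coefficients by collocation, with no time marching. The sketch below is a generic Chebyshev collocation scheme, not the GWRM itself; the degree N and node choice are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Time-spectral solution of u' = -u, u(0) = 1 on t in [0, 1].
N = 12                                              # polynomial degree (assumed)
s = np.cos(np.pi * np.arange(1, N + 1) / (N + 1))   # N interior nodes in (-1, 1)
# Map t = (s + 1)/2, so d/dt = 2 d/ds.

V = C.chebvander(s, N)                              # T_j(s_k), shape (N, N+1)
D = np.zeros_like(V)                                # T_j'(s_k)
for j in range(N + 1):
    cj = np.zeros(N + 1); cj[j] = 1.0
    D[:, j] = C.chebval(s, C.chebder(cj))

A = 2 * D + V                                       # rows enforce u'(t_k) + u(t_k) = 0
A = np.vstack([A, C.chebvander(np.array([-1.0]), N)])  # initial condition u(0) = 1
b = np.zeros(N + 1); b[-1] = 1.0
coef, *_ = np.linalg.lstsq(A, b, rcond=None)        # one solve, whole interval

u1 = C.chebval(1.0, coef)                           # evaluate u at t = 1 (s = 1)
# Spectral accuracy: u1 matches exp(-1) to near machine precision.
```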

  9. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
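    The automatic time-step control the record describes (limiting the maximum change in potential per step) can be sketched with a simple dead-band controller; the thresholds, growth/shrink factors, and names below are illustrative, not TRUST84's actual logic.

```python
def next_time_step(dt, max_change, target_change, dt_min=1e-6, dt_max=3600.0):
    """Adjust dt so the largest per-step change in potential tracks a target:
    shrink when the last step changed the solution too much, grow when the
    change was comfortably small (factors and band are illustrative)."""
    if max_change > target_change:
        dt *= 0.5                           # step was too aggressive
    elif max_change < 0.25 * target_change:
        dt *= 1.5                           # room to take larger steps
    return min(max(dt, dt_min), dt_max)

# Toy model: the per-step change scales linearly with dt.
dt = 1.0
for _ in range(20):
    change = 0.3 * dt
    dt = next_time_step(dt, change, target_change=0.1)
# dt settles where each step's change sits inside the target band.
```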

  10. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.

  11. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  12. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.

  13. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
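    The multilevel bookkeeping behind such schemes can be sketched as binning elements into power-of-two refinement levels, so that an element at level L takes 2**L substeps per global step. This is only the level-assignment step, with illustrative names and a nominal wave speed, not the LTS-Newmark update itself.

```python
import math

def lts_levels(element_sizes, c, cfl=0.9):
    """Assign each element a power-of-two level so that its local step
    dt_global / 2**level satisfies its own CFL limit cfl * h / c."""
    dt_local = [cfl * h / c for h in element_sizes]   # per-element stable step
    dt_global = max(dt_local)                         # coarsest element sets the pace
    levels = [max(0, math.ceil(math.log2(dt_global / dt))) for dt in dt_local]
    return dt_global, levels

sizes = [1.0, 1.0, 0.5, 0.01]      # one heavily refined element (100x contrast)
dt_global, levels = lts_levels(sizes, c=1.0)
# Coarse elements advance with dt_global; the fine element takes
# 2**levels[-1] substeps per global step instead of throttling the whole mesh.
```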

  14. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.

  15. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.

  16. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
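    The first-order ETD update the record compares against standard schemes treats the linear part of the ODE exactly. For the scalar model dy/dt = -a*y + b (the form of a Hodgkin-Huxley gating equation with frozen rate coefficients), ETD1 can be sketched as below; the stiff test values are illustrative.

```python
import math

def etd1_step(y, a, b, h):
    """First-order exponential time differencing for dy/dt = -a*y + b with
    a, b frozen over the step: the linear decay is integrated exactly, so
    the update remains stable even when a*h is large (the stiff case)."""
    e = math.exp(-a * h)
    return y * e + (b / a) * (1.0 - e)

# Stiff test: a = 100, b = 100 (steady state y = 1), large step h = 0.05 (a*h = 5).
y = 0.0
for _ in range(20):
    y = etd1_step(y, a=100.0, b=100.0, h=0.05)
# Forward Euler with this step, y += h*(-100*y + 100), has amplification
# factor |1 - a*h| = 4 and would blow up; ETD1 relaxes cleanly to y = 1.
```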

17. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.

  18. Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.

    1985-01-01

An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are solved successively until the process converges. An explicit coupling of the equations is described which greatly accelerates convergence. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparisons of the results with experimental data for three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.

  20. The "Motor" in Implicit Motor Sequence Learning: A Foot-stepping Serial Reaction Time Task.

    PubMed

    Du, Yue; Clark, Jane E

    2018-05-03

    This protocol describes a modified serial reaction time (SRT) task used to study implicit motor sequence learning. Unlike the classic SRT task that involves finger-pressing movements while sitting, the modified SRT task requires participants to step with both feet while maintaining a standing posture. This stepping task necessitates whole body actions that impose postural challenges. The foot-stepping task complements the classic SRT task in several ways. The foot-stepping SRT task is a better proxy for the daily activities that require ongoing postural control, and thus may help us better understand sequence learning in real-life situations. In addition, response time serves as an indicator of sequence learning in the classic SRT task, but it is unclear whether response time, reaction time (RT) representing mental process, or movement time (MT) reflecting the movement itself, is a key player in motor sequence learning. The foot-stepping SRT task allows researchers to disentangle response time into RT and MT, which may clarify how motor planning and movement execution are involved in sequence learning. Lastly, postural control and cognition are interactively related, but little is known about how postural control interacts with learning motor sequences. With a motion capture system, the movement of the whole body (e.g., the center of mass (COM)) can be recorded. Such measures allow us to reveal the dynamic processes underlying discrete responses measured by RT and MT, and may aid in elucidating the relationship between postural control and the explicit and implicit processes involved in sequence learning. Details of the experimental set-up, procedure, and data processing are described. The representative data are adopted from one of our previous studies. Results are related to response time, RT, and MT, as well as the relationship between the anticipatory postural response and the explicit processes involved in implicit motor sequence learning.

  1. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
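    The switching idea in the last sentence can be sketched on the scalar test equation y' = lam*y: use explicit Euler when the step is non-stiff and backward Euler otherwise. The indicator |lam|*h and the threshold below stand in for the paper's spectral machinery and are purely illustrative.

```python
def adaptive_euler_step(y, lam, h, threshold=1.0):
    """One step of y' = lam*y: explicit Euler when |lam|*h is small
    (non-stiff), implicit (backward) Euler otherwise."""
    if abs(lam) * h <= threshold:
        return y * (1.0 + lam * h), "explicit"
    return y / (1.0 - lam * h), "implicit"

y0, h = 1.0, 0.1
y_soft, mode_soft = adaptive_euler_step(y0, lam=-2.0, h=h)      # |lam|*h = 0.2
y_stiff, mode_stiff = adaptive_euler_step(y0, lam=-500.0, h=h)  # |lam|*h = 50
# The stiff step stays stable (and strongly damped) instead of exploding,
# while the non-stiff step avoids the cost of an implicit solve.
```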

  2. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steyer, Andrew J.; Van Vleck, Erik S.

Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  3. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
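    As a minimal illustration of the trade-off this record studies, the sketch below (names illustrative; not the Versatile Advection Code) advances 1D periodic upwind advection with a step twice the explicit CFL limit: forward Euler grows without bound, while backward Euler stays bounded at the cost of a linear solve per step.

```python
import numpy as np

n = 50
dx, a = 1.0 / n, 1.0
I = np.eye(n)
shift = np.roll(I, 1, axis=0)            # periodic shift: picks up u_{i-1}
L = (a / dx) * (shift - I)               # first-order upwind advection operator

u0 = np.sin(2 * np.pi * np.arange(n) * dx)
dt = 2.0 * dx / a                        # deliberately 2x the explicit CFL limit

u_exp = u0.copy()
u_imp = u0.copy()
for _ in range(100):
    u_exp = u_exp + dt * (L @ u_exp)             # explicit Euler: unstable here
    u_imp = np.linalg.solve(I - dt * L, u_imp)   # backward Euler: unconditionally stable

# max|u_exp| has grown by several times; max|u_imp| never exceeds the
# initial amplitude, since backward Euler here is an averaging operator.
```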

  4. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speedup for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.

  5. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    NASA Astrophysics Data System (ADS)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.

  6. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  7. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with updates of both macroscopic flow variables and the microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method whose physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state is sped up through the adoption of a numerical time step with a large CFL number. Many numerical test cases in different flow regimes, from low speed to hypersonic, such as Couette flow, cavity flow, and flow past a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  8. Hyperbolic heat conduction problems involving non-Fourier effects - Numerical simulations via explicit Lax-Wendroff/Taylor-Galerkin finite element formulations

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Namburu, Raju R.

    1989-01-01

    Numerical simulations are presented for hyperbolic heat-conduction problems that involve non-Fourier effects, using explicit, Lax-Wendroff/Taylor-Galerkin FEM formulations as the principal computational tool. Also employed are smoothing techniques which stabilize the numerical noise and accurately predict the propagating thermal disturbances. The accurate capture of propagating thermal disturbances at characteristic time-step values is achieved; numerical test cases are presented which validate the proposed hyperbolic heat-conduction problem concepts.

  9. IMEX HDG-DG: A coupled implicit hybridized discontinuous Galerkin and explicit discontinuous Galerkin approach for Euler systems on cubed sphere.

    NASA Astrophysics Data System (ADS)

    Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.

    2017-12-01

We propose IMEX HDG-DG schemes for Euler systems on the cubed sphere. Of interest is subsonic flow, where the speed of the acoustic wave is faster than that of the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into a stiff part describing the fast waves and a non-stiff part associated with nonlinear advection. The former is discretized implicitly with the HDG method, while an explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solutions in both time and space; 2) avoids overly small time step sizes; 3) requires only one linear system solve per time step; and 4) relative to DG, generates a smaller and sparser linear system while promoting further parallelism owing to the HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time step sizes.

  10. Convergence speeding up in the calculation of the viscous flow about an airfoil

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Rossow, C.

    1988-01-01

A finite volume method to solve the three-dimensional Navier-Stokes equations was developed. It is based on a cell-vertex scheme with central differences and explicit Runge-Kutta time steps. Good convergence to a stationary solution was obtained by the use of local time steps, implicit smoothing of the residuals, a multigrid algorithm, and a carefully controlled artificial dissipative term. The method is illustrated by results for transonic profiles and airfoils, and allows routine solution of the Navier-Stokes equations.

  11. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  12. A multistage time-stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1985-01-01

    A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
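Multistage schemes of this kind update the solution through a short sequence of weighted residual evaluations per time step. A minimal sketch on a scalar model problem (the coefficient set is an assumed, commonly used choice, not taken from the paper):

```python
def multistage_step(u, dt, residual, alphas):
    """One explicit multistage time step:
    u^(k) = u^n + alpha_k * dt * R(u^(k-1)); returns the final stage u^(m)."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Model problem du/dt = -u; a commonly used 4-stage coefficient set (assumed)
alphas = [1/4, 1/3, 1/2, 1.0]
u, dt = 1.0, 0.1
for _ in range(10):
    u = multistage_step(u, dt, lambda v: -v, alphas)
print(u)  # close to exp(-1) ≈ 0.3679 at t = 1
```

For a linear residual, these particular coefficients reproduce the classical fourth-order Runge-Kutta amplification factor, which is why the result tracks the exact decay so closely.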

  13. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the conventional ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, the number of update equations is reduced from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we adopt initial conditions from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations in a new finite-difference form using the leapfrog ADI-FDTD method, with the purpose of eliminating the sub-time step while retaining the unconditional stability characteristics. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different parameters affect the absorbing ability, we found suitable values after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the computation time decreases nearly fourfold and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
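The CFL restriction that leapfrog ADI-FDTD removes can be made concrete: for the explicit 3-D Yee update, the time step is bounded by the cell sizes and the wave speed. A minimal sketch (the cell size is an illustrative figure, not a value from the study):

```python
import math

def fdtd_cfl_limit(dx, dy, dz, c=3.0e8):
    """Courant limit for the explicit 3-D Yee FDTD update:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (c * math.sqrt(dx**-2 + dy**-2 + dz**-2))

# 10 m cells, an assumed discretization for a ground-model example
dt_max = fdtd_cfl_limit(10.0, 10.0, 10.0)
print(f"{dt_max:.3e} s")  # explicit FDTD cannot exceed this; ADI-FDTD can
```

An unconditionally stable scheme such as leapfrog ADI-FDTD may step well beyond `dt_max`, trading the stability bound for a (dispersion-controlled) accuracy bound instead.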

  14. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Navon, I. M.; Yu, Jian

A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly those related to the efficiency and stability of the T-Z scheme and to the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height and velocity fields.

  15. Simplified filtered Smith predictor for MIMO processes with multiple time delays.

    PubMed

    Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E

    2016-11-01

This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved for step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies the tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies illustrate the advantages of the proposed strategy compared with the standard approach, which is based on explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Application of an unstructured grid flow solver to planes, trains and automobiles

    NASA Technical Reports Server (NTRS)

    Spragle, Gregory S.; Smith, Wayne A.; Yadlin, Yoram

    1993-01-01

    Rampant, an unstructured flow solver developed at Fluent Inc., is used to compute three-dimensional, viscous, turbulent, compressible flow fields within complex solution domains. Rampant is an explicit, finite-volume flow solver capable of computing flow fields using either triangular (2d) or tetrahedral (3d) unstructured grids. Local time stepping, implicit residual smoothing, and multigrid techniques are used to accelerate the convergence of the explicit scheme. The paper describes the Rampant flow solver and presents flow field solutions about a plane, train, and automobile.

  17. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
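The cost of first-order, explicit, fixed-step integration is easy to reproduce on a toy water-balance model. A minimal sketch comparing one large explicit Euler step with a finely resolved reference on a single linear reservoir (parameter values are illustrative assumptions, not from the study):

```python
def reservoir_euler(storage, inflow, k, dt, n_sub=1):
    """Integrate dS/dt = inflow - k*storage over one step of length dt
    using n_sub fixed explicit Euler sub-steps (n_sub=1 is the cheap scheme)."""
    h = dt / n_sub
    for _ in range(n_sub):
        storage += h * (inflow - k * storage)
    return storage

# Assumed toy values: storage (mm), inflow (mm/day), recession k (1/day), dt (day)
S0, P, k, dt = 100.0, 2.0, 0.8, 1.0
coarse = reservoir_euler(S0, P, k, dt, n_sub=1)
fine = reservoir_euler(S0, P, k, dt, n_sub=1000)
print(coarse, fine)  # the single daily step lands far from the resolved reference
```

With a fast reservoir (large k), the one-step result overshoots the exponential decay badly; this is the kind of numerical error that, as the abstract notes, can distort posterior parameter distributions when left uncontrolled.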

  18. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous-time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived for dimension p = 3: for uniformly and exponentially distributed step lengths, and for fixed and Poisson distributed numbers of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
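The quantity being approximated can be checked by direct simulation: the distance from the origin after a fixed number of steps with uniform directions and exponential step lengths. A minimal Monte Carlo sketch for p = 3 (sample sizes and the seed are arbitrary choices):

```python
import math, random

def walk_distance(n_steps, rng, mean_len=1.0):
    """Distance to the origin after n_steps of a 3-D random walk with
    uniform step directions and exponentially distributed step lengths."""
    x = y = z = 0.0
    for _ in range(n_steps):
        gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        norm = math.sqrt(gx*gx + gy*gy + gz*gz)  # normalized Gaussian -> uniform direction
        step = rng.expovariate(1.0 / mean_len)
        x += step * gx / norm
        y += step * gy / norm
        z += step * gz / norm
    return math.sqrt(x*x + y*y + z*z)

rng = random.Random(1)
samples = [walk_distance(10, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # Monte Carlo estimate of the mean distance after 10 steps
```

Since E[R²] = n·E[L²] = 20 for ten exponential(1) steps, the mean distance must sit a little below √20 ≈ 4.47, which the simulation confirms; a saddlepoint approximation targets the full distribution of this distance, not just its mean.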

  19. Exploring the Content of Intraoperative Teaching.

    PubMed

    Pernar, Luise I M; Peyre, Sarah E; Hasson, Rian M; Lipsitz, Stuart; Corso, Katherine; Ashley, Stanley W; Breen, Elizabeth M

    2016-01-01

    Much teaching to surgical residents takes place in the operating room (OR). The explicit content of what is taught in the OR, however, has not previously been described. This study investigated the content of what is taught in the OR, specifically during laparoscopic cholecystectomies (LCs), for which a cognitive task analysis (CTA), explicitly delineating individual steps, was available in the literature. A checklist of necessary technical and decision-making steps to be executed during performance of LCs, anchored in the previously published CTA, was developed. A convenience sample of LCs was identified over a 12-month period from February 2011 to February 2012. Using the checklist, a trained observer recorded explicit teaching that occurred regarding these steps during each observed case. All observations were tallied and analyzed. In all, 51 LCs were observed; 14 surgery attendings and 33 residents participated in the observed cases. Of 1042 observable teaching points, only 560 (53.7%) were observed during the study period. As a proportion of all observable steps, technical steps were observed more frequently, 377 (67.3%), than decision-making steps, 183 (32.7%). Also when focusing on technical and decision-making steps alone, technical steps were taught more frequently (60.9% vs 43.3%). Only approximately half of all possible observable teaching steps were explicitly taught during LCs in this study. Technical steps were more frequently taught than decision-making steps. These findings may have important implications: a better understanding of the content of intraoperative teaching would allow educators to steer residents' preoperative preparation, modulate intraoperative instruction by members of the surgical faculty, and guide residents to the most appropriate teaching venues. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. The development of an explicit thermochemical nonequilibrium algorithm and its application to compute three dimensional AFE flowfields

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

This study presents a three-dimensional, explicit, finite-difference, shock-capturing numerical algorithm applied to viscous hypersonic flows in thermochemical nonequilibrium. The algorithm employs a two-temperature physical model. Equations governing the finite-rate chemical reactions are fully coupled to the gas dynamic equations using a novel coupling technique. The new coupling method maintains stability in the explicit, finite-rate formulation while allowing relatively large global time steps. The code uses flux-vector splitting. Comparisons with experimental data and other numerical computations verify the accuracy of the present method. The code is used to compute the three-dimensional flowfield over the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points.

  1. Time Dependent Studies of Reactive Shocks in the Gas Phase

    DTIC Science & Technology

    1978-11-16

which takes advantage of time-step splitting. The fluid dynamics time integration is performed by an explicit two-step predictor-corrector technique... Naval Research Laboratory, Washington, DC; ONR Project RR024.02.41, Office of Naval Research... self-consistently on their own characteristic time-scales using the flux-corrected transport and selected asymptotic methods, respectively. Results are

  2. A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criterion. To overcome this limitation, in this work we introduce the application of a multiple grid al...

  3. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
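The stiff/non-stiff partitioning can be illustrated with a first-order IMEX Euler step on a scalar model problem, standing in for the high-order additive Runge-Kutta methods of the paper (the model problem and coefficients are assumptions for illustration only):

```python
def imex_euler(u, dt, stiff_lam, nonstiff, n_steps):
    """First-order IMEX Euler: the stiff linear term (fast, acoustic-like)
    is treated implicitly, the nonstiff term (advective-like) explicitly:
      (u_{n+1} - u_n)/dt = stiff_lam * u_{n+1} + nonstiff(u_n)"""
    for _ in range(n_steps):
        u = (u + dt * nonstiff(u)) / (1.0 - dt * stiff_lam)
    return u

# Assumed model problem u' = -1000*u + 1, steady state u = 0.001.
# dt = 0.1 is far beyond the explicit stability limit for the stiff term
# (about 2/1000), yet the IMEX step remains stable and converges.
u = imex_euler(1.0, dt=0.1, stiff_lam=-1000.0, nonstiff=lambda v: 1.0, n_steps=50)
print(u)  # ≈ 0.001
```

As in the paper, the step size is set by the slow (explicitly treated) term, while the fast term no longer constrains stability.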

  4. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

Abstract Particle-based Brownian dynamics simulations offer the opportunity to not only simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237
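The collision-detection-before-displacement idea can be sketched in a few lines: propose Gaussian Brownian displacements, then flag particle pairs whose proposed positions would overlap so that a reaction (or rejection) rule could be applied instead of the overlapping move. A minimal sketch (units, radii, and the diffusion coefficient are illustrative assumptions; the detailed-balance machinery of the paper is not reproduced here):

```python
import math, random

def bd_step(positions, radii, D, dt, rng):
    """One Brownian-dynamics step: draw Gaussian displacements with
    per-coordinate std sqrt(2*D*dt), then detect pairs whose proposed
    positions would overlap BEFORE committing the move."""
    sigma = math.sqrt(2.0 * D * dt)
    proposed = [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in positions]
    collisions = []
    for i in range(len(proposed)):
        for j in range(i + 1, len(proposed)):
            if math.dist(proposed[i], proposed[j]) < radii[i] + radii[j]:
                collisions.append((i, j))
    return proposed, collisions

rng = random.Random(7)
pos = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]          # two particles, assumed units
new_pos, hits = bd_step(pos, radii=[1.0, 1.0], D=0.1, dt=0.01, rng=rng)
print(hits)  # particles start far apart: no collision flagged this step
```

A larger time step widens the displacement distribution, so reliable pre-move collision detection (rather than post-move overlap checks) is what lets such schemes take bigger steps without missing reactions.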

  5. Comparative-effectiveness research to aid population decision making by relating clinical outcomes and quality-adjusted life years.

    PubMed

    Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M

    2013-04-01

    Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.

  6. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and that delay compensation methods can reduce the relative error of the displacement peak value to less than 5%, even with a large time step and large time delay.
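Of the explicit integrators compared, the central difference method is the simplest to sketch. A minimal single-degree-of-freedom free-vibration example (parameter values are illustrative assumptions, not from the study):

```python
def cdm_free_vibration(m, k, u0, dt, n_steps):
    """Explicit central difference method for m*u'' + k*u = 0:
    u_{n+1} = 2*u_n - u_{n-1} + dt**2 * (-k/m) * u_n,
    started with a Taylor step assuming zero initial velocity."""
    u_prev = u0
    u = u0 + 0.5 * dt**2 * (-k / m) * u0
    history = [u_prev, u]
    for _ in range(n_steps - 1):
        u_next = 2.0 * u - u_prev + dt**2 * (-k / m) * u
        u_prev, u = u, u_next
        history.append(u)
    return history

OMEGA = 2.0 * 3.141592653589793            # assumed natural frequency, period = 1
# dt is well below the CDM stability limit dt_crit = 2/OMEGA
hist = cdm_free_vibration(m=1.0, k=OMEGA**2, u0=1.0, dt=0.001, n_steps=1000)
print(round(hist[-1], 3))  # after one full period the displacement returns near 1.0
```

CDM needs only one residual-like evaluation per step and, for diagonal damping, no system solve, which is consistent with the abstract's finding that it is the cheapest option in that case.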

  7. Analysis of Time Filters in Multistep Methods

    NASA Astrophysics Data System (ADS)

    Hurl, Nicholas

Geophysical flow simulations have evolved sophisticated implicit-explicit time stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the Crank-Nicolson Leapfrog method with the Robert-Asselin-Williams (RAW) time filter for systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time step restriction for energy stability of CNLF+RA is smaller than for CNLF, and the CNLF+RAW restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard time-step condition.
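The RA filter itself can be sketched on a scalar oscillation equation (plain leapfrog rather than CNLF, and with assumed parameter values, so this is only an illustration of the filter mechanics, not the thesis's scheme):

```python
def leapfrog_ra(f, u0, dt, n_steps, nu=0.1):
    """Leapfrog with the Robert-Asselin time filter:
      u_{n+1} = v_{n-1} + 2*dt*f(u_n)                  (leapfrog on filtered v)
      v_n     = u_n + nu*(v_{n-1} - 2*u_n + u_{n+1})   (RA filter)
    The filter damps the spurious computational mode of leapfrog."""
    v_prev = u0                      # filtered value at step n-1
    u = u0 + dt * f(u0)              # bootstrap with one Euler step (assumed choice)
    for _ in range(n_steps - 1):
        u_next = v_prev + 2.0 * dt * f(u)
        v_prev = u + nu * (v_prev - 2.0 * u + u_next)
        u = u_next
    return u

# Oscillation model problem u' = i*u: the exact solution stays on the unit circle
u = leapfrog_ra(lambda v: 1j * v, u0=1.0 + 0j, dt=0.01, n_steps=100)
print(abs(u), u.real)  # magnitude stays near 1; real part near cos(1)
```

The slight magnitude loss visible for larger `nu` is the numerical dissipation that the thesis's analysis quantifies for the CNLF+RA and CNLF+RAW combinations.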

  8. Derivation of the Time-Reversal Anomaly for (2 +1 )-Dimensional Topological Phases

    NASA Astrophysics Data System (ADS)

    Tachikawa, Yuji; Yonekura, Kazuya

    2017-09-01

    We prove an explicit formula conjectured recently by Wang and Levin for the anomaly of time-reversal symmetry in (2 +1 )-dimensional fermionic topological quantum field theories. The crucial step is to determine the cross-cap state in terms of the modular S matrix and T2 eigenvalues, generalizing the recent analysis by Barkeshli et al. in the bosonic case.

  9. Multigrid for hypersonic viscous two- and three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.

    1991-01-01

    The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.

  10. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H{sup −} source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H{sup −} ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  11. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  12. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it simultaneously retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are assessed. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM), called the Energy Conserving Semi-Implicit Method (ECSIM). • The novelty of the new method is that, unlike any of its predecessors, it retains the explicit computational cycle while conserving energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.

  13. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
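As a rough, generic illustration of the grid-points-per-wavelength comparisons surveyed above (a sketch assuming a periodic sine-wave test; the function name `phase_error` is hypothetical, not from the paper), one can measure how the maximum first-derivative error of the standard second- and fourth-order centered differences falls as resolution grows:

```python
import numpy as np

def phase_error(order, ppw):
    """Max error of a centered first-derivative approximation of sin(x)
    on a periodic grid with `ppw` points per wavelength."""
    n = 10 * ppw                 # resolve 10 full wavelengths (keeps the grid periodic)
    h = 2 * np.pi / ppw
    x = np.arange(n) * h
    u = np.sin(x)
    if order == 2:               # standard 3-point centered difference
        du = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
    else:                        # 5-point fourth-order centered difference
        du = (8 * (np.roll(u, -1) - np.roll(u, 1))
              - (np.roll(u, -2) - np.roll(u, 2))) / (12 * h)
    return np.max(np.abs(du - np.cos(x)))
```

At 10 points per wavelength the fourth-order stencil is already roughly an order of magnitude more accurate, and doubling the resolution cuts the second-order error by about 4x, consistent with the accuracy orders such reviews compare.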

  14. Toward Modeling the Learner's Personality Using Educational Games

    ERIC Educational Resources Information Center

    Essalmi, Fathi; Tlili, Ahmed; Ben Ayed, Leila Jemni; Jemmi, Mohamed

    2017-01-01

    Learner modeling is a crucial step in the learning personalization process. It allows taking into consideration the learner's profile to make the learning process more efficient. Most studies refer to an explicit method, namely questionnaire, to model learners. Questionnaires are time consuming and may not be motivating for learners. Thus, this…

  15. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By restricting the number of vectors to a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
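The Arnoldi-plus-least-squares structure of GMRES described above can be sketched in a few lines (a minimal, unpreconditioned, single-cycle version with a zero initial guess; `gmres_solve` is a hypothetical name, and dense NumPy arrays stand in for the flow Jacobian):

```python
import numpy as np

def gmres_solve(A, b, m=20):
    """One GMRES cycle: build an m-dimensional Krylov basis with Arnoldi,
    then minimize the L2 norm of the residual over that subspace."""
    n = len(b)
    Q = np.zeros((n, m + 1))        # orthonormal Krylov basis vectors
    H = np.zeros((m + 1, m))        # upper Hessenberg projection of A
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):      # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:     # "happy breakdown": subspace is invariant
            k = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    # small (k+1) x k least-squares problem minimizing ||beta*e1 - H y||_2
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y
```

For a diagonally dominant system such as the implicit-Euler matrix of a 1D diffusion operator, a Krylov dimension of 20-30 already drives the residual close to round-off; production solvers add restarts and preconditioning on top of this skeleton.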

  16. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time step, delta t, is restricted by the CFL-like condition delta t <= Const · N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality that is interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.
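The delta t <= Const · N^(-2) restriction can be traced to the O(N^-2) clustering of Jacobi-type collocation points near the boundaries. A quick numerical check of that clustering for Chebyshev points (an illustrative sketch, not part of the paper's proofs; `min_chebyshev_spacing` is a hypothetical name):

```python
import numpy as np

def min_chebyshev_spacing(N):
    """Smallest gap between Chebyshev collocation points x_j = cos(pi*j/N)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    return np.min(np.abs(np.diff(x)))

# The boundary gap is 1 - cos(pi/N) ~ pi^2/(2 N^2), so doubling N shrinks
# it by roughly a factor of 4.
```

An explicit scheme whose stable time step is proportional to the smallest local spacing therefore inherits the N^(-2) scaling stated in the abstract.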

  17. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1990-01-01

The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time step, delta t, is restricted by the CFL-like condition delta t <= Const · N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality that is interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.

  18. Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium

    NASA Astrophysics Data System (ADS)

    González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César

    2018-01-01

    This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
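The explicit, fully implicit, and Crank-Nicolson schemes compared above are the θ = 0, 1, and 1/2 members of the θ-method family. A minimal sketch for plain 1D diffusion (a generic illustration with Dirichlet boundaries and a uniform grid, not the authors' non-uniform dual-porosity discretization; `theta_step` is a hypothetical name):

```python
import numpy as np

def theta_step(u, r, theta):
    """One theta-method step for u_t = u_xx on a uniform grid.
    r = dt/dx^2; theta = 0 explicit, 0.5 Crank-Nicolson, 1 fully implicit."""
    n = len(u)
    # tridiagonal Laplacian with homogeneous Dirichlet boundaries
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = np.eye(n) - theta * r * L        # implicit part (solved)
    B = np.eye(n) + (1 - theta) * r * L  # explicit part
    return np.linalg.solve(A, B @ u)
```

At r = dt/dx^2 = 1, beyond the explicit stability limit r <= 1/2, the θ = 0 scheme blows up on the sawtooth mode while the θ = 1 scheme remains stable and damps the solution, matching the convergence behavior contrasted in the abstract.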

  19. Constructing and Verifying Program Theory Using Source Documentation

    ERIC Educational Resources Information Center

    Renger, Ralph

    2010-01-01

    Making the program theory explicit is an essential first step in Theory Driven Evaluation (TDE). Once explicit, the program logic can be established making necessary links between the program theory, activities, and outcomes. Despite its importance evaluators often encounter situations where the program theory is not explicitly stated. Under such…

  20. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.

  1. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (one of the lowest costs known in the literature), and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) so as to be well adapted to IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.

  2. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining accurate across changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly excessive numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.

  3. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

    A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scaleable with the number of blade rows. Enough flips are run (between 50 and 200) so the solution in the entire machine is not changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than the other two directions. This code uses MPI for message passing. 
The parallel speed-up of the solver portion (no I/O or body force calculation) is presented for a grid with 227 points axially.

  4. Willed action, free will, and the stochastic neurodynamics of decision-making

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

It is shown that the randomness of the firing times of neurons in decision-making attractor neuronal networks that is present before the decision cues are applied can cause statistical fluctuations that influence the decision that will be taken. In this rigorous sense, it is possible to partially predict decisions before they are made. This raises issues about free will and determinism. There are many decision-making networks in the brain. Some decision systems operate to choose between gene-specified rewards such as taste, touch, and beauty (as, for example, in the peacock's tail). Other processes capable of planning ahead with multiple steps held in working memory may require correction by higher order thoughts that may involve explicit, conscious processing. The explicit system can allow the gene-specified rewards not to be selected, or to be deferred. The decisions between the selfish gene-specified rewards, and the explicitly calculated rewards that are in the interests of the individual, the phenotype, may themselves be influenced by noise in the brain. When the explicit planning system does take the decision, it can report on its decision-making, and can provide a causal account rather than a confabulation about the decision process. We might use the terms “willed action” and “free will” to refer to the operation of the planning system that can think ahead over several steps held in working memory with which it can take explicit decisions. Reduced connectivity in some of the default mode cortical regions, including the precuneus, that are active during self-initiated action appears to be related to the reduction in the sense of self and agency, of causing willed actions, that can be present in schizophrenia. PMID:22973205

  5. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1987-01-01

The technique of obtaining second-order oscillation-free total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusive flux ('smoothing') to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation. Switching to an implicit scheme removed the time-step limitation.
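A generic Sweby-type flux-limited scheme for linear advection shows the mechanics of limiting a flux so that total variation cannot grow; this is an illustrative sketch with a minmod limiter, not the entropy-based construction of the paper, and the names `tvd_step` and `total_variation` are hypothetical:

```python
import numpy as np

def tvd_step(u, c):
    """Minmod-limited advection step for u_t + a u_x = 0 (a > 0, periodic);
    c = a*dt/dx must satisfy 0 < c <= 1."""
    du = np.roll(u, -1) - u                 # forward differences u_{i+1}-u_i
    dl = u - np.roll(u, 1)                  # backward differences u_i-u_{i-1}
    r = np.zeros_like(u)
    mask = np.abs(du) > 1e-14
    r[mask] = dl[mask] / du[mask]           # smoothness ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter
    f = u + 0.5 * (1.0 - c) * phi * du      # limited numerical flux at i+1/2 (per unit a)
    return u - c * (f - np.roll(f, 1))

def total_variation(u):
    return np.sum(np.abs(np.roll(u, -1) - u))
```

Advecting a square wave with this step keeps the total variation non-increasing and introduces no new extrema, whereas the unlimited second-order centered scheme would oscillate at the discontinuities.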

  6. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  7. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
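The subcycling idea, taking several small explicit steps for the stiff, fast degrees of freedom inside one large step for the slow ones, can be sketched on a toy two-variable ODE (forward Euler with hypothetical decay rates; this is not the dislocation-dynamics integrator of the paper):

```python
import numpy as np

def multirate_step(y_fast, y_slow, dt, nsub):
    """Advance a slow variable (y' = -y) with one forward-Euler step of size dt,
    while subcycling a stiff fast variable (y' = -1000*y) with nsub substeps."""
    y_slow = y_slow + dt * (-y_slow)
    h = dt / nsub
    for _ in range(nsub):
        y_fast = y_fast + h * (-1000.0 * y_fast)
    return y_fast, y_slow
```

At dt = 0.01 a single-rate Euler step is unstable for the fast variable (amplification |1 - 10| = 9 per step), while 20 substeps restore stability without shrinking the global step that the slow variable needs: the efficiency argument behind subcycling.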

  8. Forest management planning for timber production: a sequential approach

    Treesearch

    Krishna P. Rustagi

    1978-01-01

    Explicit forest management planning for timber production beyond the first few years at any time necessitates use of information which can best be described as suspect. The two-step approach outlined here concentrates on the planning strategy over the next few years without losing sight of the long-run productivity. Frequent updating of the long-range and short-range...

  9. Explicit formulation of second and third order optical nonlinearity in the FDTD framework

    NASA Astrophysics Data System (ADS)

    Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas

    2018-01-01

The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires iteratively solving an implicit form of Maxwell's equations over the entire numerical space at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.

  10. Learning to Sing to a Different Tune: Identifying Means to Enrich Grammar Curricula through Diversifying Explicit Instruction

    ERIC Educational Resources Information Center

    Schenck, Andrew D.

    2014-01-01

    Characteristics of a grammatical feature, type of instruction, and proficiency level of the learner all contribute to the effectiveness of various types of explicit grammar curricula. Modern curricular designs and explicit pedagogical techniques must move beyond traditional one-size-fits-all strategies. This can be accomplished in two steps.…

  11. An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1988-01-01

    An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.

  12. Numerical stability of an explicit finite difference scheme for the solution of transient conduction in composite media

    NASA Technical Reports Server (NTRS)

    Campbell, W.

    1981-01-01

    A theoretical evaluation of the stability of an explicit finite difference solution of the transient temperature field in a composite medium is presented. The grid points of the field are assumed uniformly spaced, and media interfaces are either vertical or horizontal and pass through grid points. In addition, perfect contact between different media (infinite interfacial conductance) is assumed. A finite difference form of the conduction equation is not valid at media interfaces; therefore, heat balance forms are derived. These equations were subjected to stability analysis, and a computer graphics code was developed that permitted determination of a maximum time step for a given grid spacing.
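For a single homogeneous medium, the kind of stability limit produced by such an analysis reduces to the classical von Neumann bound r = alpha*dt/dx^2 <= 1/2 for the explicit FTCS scheme. The sketch below is a generic 1D illustration, not the composite-media heat-balance forms of the report, and the function names are hypothetical:

```python
import numpy as np

def max_stable_dt(alpha, dx):
    """Von Neumann stability limit for explicit (FTCS) 1D diffusion."""
    return dx**2 / (2.0 * alpha)

def ftcs(u0, alpha, dx, dt, steps):
    """March the explicit scheme; boundary values are held fixed (Dirichlet)."""
    u = u0.copy()
    r = alpha * dt / dx**2
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u
```

Stepping 2% above the limit excites the sawtooth mode, whose amplitude grows by roughly |1 - 4r| per step, while 1% below the limit the discrete maximum principle keeps the solution bounded by its initial extremes.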

  13. Development of an extended Kalman filter for the self-sensing application of a spring-biased shape memory alloy wire actuator

    NASA Astrophysics Data System (ADS)

    Gurung, H.; Banerjee, A.

    2016-02-01

    This report presents the development of an extended Kalman filter (EKF) to harness the self-sensing capability of a shape memory alloy (SMA) wire, actuating a linear spring. The stress and temperature of the SMA wire, constituting the state of the system, are estimated using the EKF, from the measured change in electrical resistance (ER) of the SMA. The estimated stress is used to compute the change in length of the spring, eliminating the need for a displacement sensor. The system model used in the EKF comprises the heat balance equation and the constitutive relation of the SMA wire coupled with the force-displacement behavior of a spring. Both explicit and implicit approaches are adopted to evaluate the system model at each time-update step of the EKF. Next, in the measurement-update step, estimated states are updated based on the measured electrical resistance. It has been observed that for the same time step, the implicit approach consumes less computational time than the explicit method. To verify the implementation, EKF estimated states of the system are compared with those of an established model for different inputs to the SMA wire. An experimental setup is developed to measure the actual spring displacement and ER of the SMA, for any time-varying voltage applied to it. The process noise covariance is decided using a heuristic approach, whereas the measurement noise covariance is obtained experimentally. Finally, the EKF is used to estimate the spring displacement for a given input and the corresponding experimentally obtained ER of the SMA. The qualitative agreement between the EKF estimated displacement with that obtained experimentally reveals the true potential of this approach to harness the self-sensing capability of the SMA.
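The time-update/measurement-update cycle described above follows the standard EKF equations. Below is a minimal generic sketch (a cubic measurement map stands in for the nonlinear resistance-to-state relation; all names and the toy model are hypothetical, not the authors' SMA system model):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended-Kalman-filter cycle: explicit time update (predict)
    followed by a measurement update (correct)."""
    # --- time update ---
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # --- measurement update ---
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Here the time update evaluates the system model explicitly once per step; the abstract notes that an implicit evaluation of the model at the time-update stage can consume less computational time for the same time step.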

  14. Improving carbon monitoring and reporting in forests using spatially-explicit information.

    PubMed

    Boisvenue, Céline; Smiley, Byron P; White, Joanne C; Kurz, Werner A; Wulder, Michael A

    2016-12-01

Understanding and quantifying carbon (C) exchanges between the biosphere and the atmosphere, specifically the process of C removal from the atmosphere and how this process is changing, is the basis for developing appropriate adaptation and mitigation strategies for climate change. Monitoring forest systems and reporting on greenhouse gas (GHG) emissions and removals are now required components of international efforts aimed at mitigating rising atmospheric GHG. Spatially-explicit information about forests can improve the estimates of GHG emissions and removals. However, at present, remotely-sensed information on forest change is not commonly integrated into GHG reporting systems. New, detailed (30-m spatial resolution) forest change products derived from satellite time series, informing on location, magnitude, and type of change at an annual time step, have recently become available. Here we estimate the forest GHG balance using these new Landsat-based change data, a spatial forest inventory, and developed yield curves as inputs to the Carbon Budget Model of the Canadian Forest Sector (CBM-CFS3) to estimate GHG emissions and removals at a 30 m resolution for a 13 Mha pilot area in Saskatchewan, Canada. Our results depict the forests as a cumulative C sink (17.98 Tg C, or 0.64 Tg C year^-1) between 1984 and 2012, with an average C density of 206.5 (±0.6) Mg C ha^-1. Comparisons between our estimates and estimates from Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS) were possible only on a subset of our study area. In our simulations the area was a C sink, while in the official reporting simulations it was a C source. Forest area and overall C stock estimates also differ between the two simulated estimates. Both estimates have similar uncertainties, but the spatially-explicit results we present here better quantify the potential improvement brought on by spatially-explicit modelling. We discuss the sources of the differences between these estimates. This study represents an important first step towards the integration of spatially-explicit information into Canada's NFCMARS.

  15. Oceanic signals in rapid polar motion: results from a barotropic forward model with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Schindelegger, Michael; Quinn, Katherine J.; Ponte, Rui M.

    2017-04-01

    Numerical modeling of non-tidal variations in ocean currents and bottom pressure has played a key role in closing the excitation budget of Earth's polar motion for a wide range of periodicities. Non-negligible discrepancies between observations and model accounts of pole position changes prevail, however, on sub-monthly time scales and call for examination of hydrodynamic effects usually omitted in general circulation models. Specifically, complete hydrodynamic cores must incorporate self-attraction and loading (SAL) feedbacks on redistributed water masses, effects that produce ocean bottom pressure perturbations of typically about 10% relative to the computed mass variations. Here, we report on a benchmark simulation with a near-global, barotropic forward model forced by wind stress, atmospheric pressure, and a properly calculated SAL term. The latter is obtained by decomposing ocean mass anomalies on a 30-minute grid into spherical harmonics at each time step and applying Love numbers to account for seafloor deformation and changed gravitational attraction. The increase in computational time at each time step is on the order of 50%. Preliminary results indicate that the explicit consideration of SAL in the forward runs increases the fidelity of modeled polar motion excitations, in particular on time scales shorter than 5 days, as evident from cross-spectral comparisons with geodetic excitation. Definite conclusions regarding the relevance of SAL in simulating rapid polar motion are, however, still hampered by the model's incomplete domain representation, which excludes parts of the highly energetic Arctic Ocean.

  16. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1986-01-01

    The technique of obtaining second order, oscillation free, total variation diminishing (TVD), scalar difference schemes by adding a limited diffusion flux (smoothing) to a second order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell by cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second order spatial accuracy was found to have an extremely restrictive time step limitation (Delta t less than Delta x squared). Switching to an implicit scheme removed the time step limitation.
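The Delta t < Delta x squared restriction quoted above is easy to reproduce. The following sketch (illustrative only, not Merriam's entropy-based scheme) applies forward and backward Euler to 1-D diffusion u_t = u_xx on a periodic grid; the explicit update blows up at a step size twenty times beyond its stability limit, while the implicit one remains stable:

```python
# Sketch: why an explicit update for a diffusion-like term carries a
# Delta t < Delta x^2 restriction, and how an implicit (backward Euler)
# step removes it. Problem: u_t = u_xx on a periodic grid.
import numpy as np

def step_explicit(u, dt, dx):
    """Forward Euler; stable only if dt <= dx**2 / 2."""
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    return u + (dt / dx**2) * lap

def step_implicit(u, dt, dx):
    """Backward Euler: solve (I - r*L) u_new = u; stable for any dt."""
    n, r = u.size, dt / dx**2
    A = (1.0 + 2.0 * r) * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] -= r
        A[i, (i + 1) % n] -= r
    return np.linalg.solve(A, u)

n = 64
dx = 1.0 / n
dt = 10.0 * dx**2          # 20x beyond the explicit stability limit dx**2/2
u0 = np.sin(2.0 * np.pi * dx * np.arange(n))

ue, ui = u0.copy(), u0.copy()
for _ in range(50):
    ue = step_explicit(ue, dt, dx)
    ui = step_implicit(ui, dt, dx)

print(np.abs(ue).max())    # explodes: high-frequency modes are amplified
print(np.abs(ui).max())    # decays smoothly toward zero
```

The implicit step trades the stability limit for a linear solve per step, which is exactly the trade the abstract describes.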

  17. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  18. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on the space-time interest points, and (3) estimating the crowd density using multiple regression. Experimental results on the PETS 2009 dataset demonstrate the efficiency and robustness of the proposed method.

  19. Peridynamic thermal diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail

    This study presents the derivation of the ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time stepping scheme is adopted for the numerical solution of various benchmark problems with known solutions. This paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.
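As an illustration of the kind of explicit time stepping used for such nonlocal equations, the sketch below (a much-simplified 1-D analogue with an assumed 1/distance kernel, not the paper's ordinary state-based formulation) advances a peridynamic-style heat conduction law with forward Euler; pairwise exchange conserves total heat and spreads an initial hot spot:

```python
# Simplified 1-D sketch of explicit time stepping for a nonlocal,
# peridynamic-style heat conduction law: each point exchanges heat with all
# neighbours inside a finite horizon. The 1/distance kernel and all
# parameter values are illustrative assumptions, not taken from the paper.
import numpy as np

def nonlocal_heat_step(T, dx, dt, horizon, kappa):
    m = int(round(horizon / dx))       # number of neighbours on each side
    dTdt = np.zeros_like(T)
    for k in range(1, m + 1):
        w = kappa / (k * dx)           # assumed kernel weight ~ 1/distance
        dTdt += w * (np.roll(T, k) - T) + w * (np.roll(T, -k) - T)
    return T + dt * dx * dTdt          # explicit (forward Euler) update

n, dx = 100, 0.01
T = np.zeros(n)
T[n // 2] = 1.0                        # initial hot spot
total0 = T.sum()
for _ in range(200):
    T = nonlocal_heat_step(T, dx, dt=1e-4, horizon=3 * dx, kappa=1.0)

print(abs(T.sum() - total0))           # pairwise exchange conserves heat
print(T.max())                         # the hot spot spreads out
```

As with any explicit scheme, the time step here must stay below a stability limit set by the sum of the kernel weights.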

  20. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  1. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations that are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit methods) give outstanding results, even for very large time steps.
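The benefit of exponential integration on stiff equations can already be seen on the scalar linear test equation. In the sketch below (not the viscoplastic model itself), the exponential step is exact for any step size, while forward Euler at the same step size diverges:

```python
# Why exponential integration handles stiffness: for the linear test
# equation y' = lam*y the exponential step is exact for ANY step size,
# while forward Euler is unstable once |lam|*dt > 2.
import numpy as np

lam = -1000.0          # stiff decay rate
dt = 0.01              # lam*dt = -10: far outside forward Euler's stability
y_exp = y_fe = 1.0
for _ in range(20):
    y_exp *= np.exp(lam * dt)     # exponential integrator step (exact here)
    y_fe += dt * lam * y_fe       # forward Euler step (amplified by |1 - 10| = 9)

print(y_exp)   # essentially exp(-200): vanishingly small, as it should be
print(y_fe)    # grows like 9**20: a catastrophic blow-up
```

For nonlinear systems such as viscoplasticity, exponential methods apply this idea to a local linearization, which is what makes very large time steps feasible.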

  2. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which time scales ranging over many orders of magnitude arise. Since the fastest of these time scales may set a restrictively small time-step limit for explicit methods, implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at the time scales of interest. Furthermore, to enforce properties such as charge conservation and a divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time scales, the mixed discretization makes the linear solves required by implicit methods particularly difficult for black-box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability, which we demonstrate on a range of problems.

  3. Implicit Runge-Kutta Methods with Explicit Internal Stages

    NASA Astrophysics Data System (ADS)

    Skvortsov, L. M.

    2018-03-01

    The main computational cost of implicit Runge-Kutta methods is the solution of a system of algebraic equations at every step. By introducing explicit stages, it is possible to increase the stage (or pseudo-stage) order of the method, which increases the accuracy and avoids order reduction in solving stiff problems, without additional costs of solving algebraic equations. The paper presents implicit methods with an explicit first stage and one or two explicit internal stages. The results of solving test problems are compared with those of similar methods having no explicit internal stages.
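A familiar concrete instance of an implicit Runge-Kutta method with an explicit first stage is the trapezoidal rule (Lobatto IIIA): stage one is just an evaluation of f and costs no algebraic solve, while only the second stage needs one. A minimal scalar sketch (with a Newton solve for the implicit stage; this is a standard method, not the new methods constructed in the paper):

```python
# The trapezoidal rule as a two-stage implicit Runge-Kutta method whose
# FIRST stage is explicit (k1 = f(y_n) needs no solve); only the second
# stage requires an algebraic solve, done here by scalar Newton iteration.
import numpy as np

def trapezoidal_step(f, dfdy, y, t, dt, newton_iters=10):
    k1 = f(t, y)                       # explicit first stage: free
    z = y + dt * k1                    # predictor for the implicit stage
    for _ in range(newton_iters):      # solve z = y + dt/2 * (k1 + f(t+dt, z))
        g = z - y - 0.5 * dt * (k1 + f(t + dt, z))
        z -= g / (1.0 - 0.5 * dt * dfdy(t + dt, z))
    return z

# Linear test problem y' = -y with exact solution exp(-t).
f = lambda t, y: -y
dfdy = lambda t, y: -1.0
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):                   # integrate to t = 1
    y = trapezoidal_step(f, dfdy, y, t, dt)
    t += dt

print(abs(y - np.exp(-1.0)))           # second-order accurate: tiny error
```

The explicit first stage is what gives the method its higher stage order at no extra algebraic cost, which is the effect the paper exploits.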

  4. A New Family of Compact High Order Coupled Time-Space Unconditionally Stable Vertical Advection Schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, F.; Debreu, L.

    2016-02-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e., most of the grid points are integrated with Courant numbers small compared to the Courant-Friedrichs-Lewy (CFL) condition, except in just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining accurate across changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e., mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e., large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high-order accuracy in time and space has been presented so far in the literature. 
Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
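For contrast, the classical implicit workaround that such schemes aim to improve on can be sketched in a few lines: backward-Euler upwind advection is unconditionally stable and monotone at any Courant number, but at the cost of the heavy numerical damping criticized above (illustrative code, not the authors' compact schemes):

```python
# Backward-Euler first-order upwind advection on a periodic grid: stable
# and monotone at any Courant number, at the price of strong numerical
# damping. Illustrates why implicit schemes escape the vertical CFL limit.
import numpy as np

def implicit_upwind_step(u, c):
    """One backward Euler step of u_t + a*u_x = 0, with c = a*dt/dx > 0."""
    n = u.size
    A = (1.0 + c) * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] = -c         # upwind neighbour
    return np.linalg.solve(A, u)

n = 50
u0 = np.where(np.arange(n) < 10, 1.0, 0.0)   # square pulse
u = u0.copy()
for _ in range(10):
    u = implicit_upwind_step(u, c=5.0)       # Courant number 5: an explicit
                                             # upwind step would blow up
print(u.min(), u.max())                      # stays within [0, 1]: monotone
```

The matrix is an M-matrix whose rows sum to one, so a discrete maximum principle holds for any Courant number; the drawback, as the abstract notes, is the accompanying damping and phase error of low-order implicit advection.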

  5. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  6. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.

  7. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
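The super-time-stepping recipe of Alexiades et al. cited above builds one stable "superstep" out of N explicit substeps whose sizes follow a Chebyshev-root formula, so the superstep covers far more physical time than N equal CFL-limited steps. A sketch of the substep formula as commonly stated for the 1996 method (the damping parameter nu and the exact normalization should be checked against the original paper):

```python
# Super-time-stepping substep sizes (after Alexiades et al. 1996):
#   tau_j = dt_cfl / ((nu - 1)*cos((2j - 1)*pi / (2N)) + 1 + nu)
# As nu -> 0 the superstep sum approaches N**2 * dt_cfl, a large speedup
# over N plain CFL-limited explicit steps for diffusion problems.
import numpy as np

def sts_substeps(dt_cfl, N, nu):
    """Substep sizes tau_j; nu in (0, 1) is a damping parameter."""
    j = np.arange(1, N + 1)
    theta = (2 * j - 1) * np.pi / (2 * N)
    return dt_cfl / ((nu - 1) * np.cos(theta) + 1 + nu)

dt_cfl, N, nu = 1.0, 10, 0.05
tau = sts_substeps(dt_cfl, N, nu)
superstep = tau.sum()
print(superstep)   # about 22: more than twice the time of N plain CFL steps
```

Smaller nu stretches the superstep further (toward N**2 steps' worth of time) at the cost of weaker damping of intermediate instabilities.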

  8. Influence of numerical dissipation in computing supersonic vortex-dominated flows

    NASA Technical Reports Server (NTRS)

    Kandil, O. A.; Chuang, A.

    1986-01-01

    Steady supersonic vortex-dominated flows are solved using the unsteady Euler equations for conical and three-dimensional flows around sharp- and round-edged delta wings. The computational method is a finite-volume scheme which uses four-stage Runge-Kutta time stepping with explicit second- and fourth-order dissipation terms. The grid is generated by a modified Joukowski transformation. The steady flow solution is obtained through time stepping with initial conditions corresponding to the freestream conditions, and the bow shock is captured as part of the solution. The scheme is applied to flat-plate and elliptic-section wings with a leading-edge sweep of 70 deg at an angle of attack of 10 deg and a freestream Mach number of 2.0. Three grid sizes of 29 x 39, 65 x 65 and 100 x 100 have been used. The results for sharp-edged wings are consistent across all grid sizes and variations of the artificial viscosity coefficients. The results for round-edged wings show that separated and attached flow solutions can be obtained by varying the artificial viscosity coefficients. They also show that the solutions are independent of the way time stepping is done: local time stepping and global minimum time stepping produce the same solutions.

  9. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects of different models of an insulated cylinder on algorithm performance are demonstrated. The stiffness of the problem is highly sensitive to modeling details, and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and of operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  10. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects of different models of an insulated cylinder on algorithm performance are demonstrated. The stiffness of the problem is highly sensitive to modeling details, and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and of operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  11. Interactive controls of herbivory and fluvial dynamics on landscape vegetation patterns on the Tanana River floodplain, interior Alaska.

    Treesearch

    Lem G. Butler; Knut Kielland; T. Scott Rupp; Thomas A. Hanley

    2007-01-01

    We examined the interactive effects of mammalian herbivory and fluvial dynamics on vegetation dynamics and composition along the Tanana River in interior Alaska between Fairbanks and Manley Hot Springs. We used a spatially explicit model of landscape dynamics (ALFRESCO) to simulate vegetation changes on a 1-year time-step. The model was run for 250 years and was...

  12. High speed inviscid compressible flow by the finite element method

    NASA Technical Reports Server (NTRS)

    Zienkiewicz, O. C.; Loehner, R.; Morgan, K.

    1984-01-01

    The finite element method and an explicit time-stepping algorithm based on Taylor-Galerkin schemes with an appropriate artificial viscosity are combined with an automatic mesh refinement process designed to produce accurate steady-state solutions to problems of inviscid compressible flow in two dimensions. Results for two test problems are included, demonstrating the excellent performance characteristics of the proposed procedures.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendonça, João M.; Grimm, Simon L.; Grosheintz, Luc

    We have designed and developed, from scratch, a general circulation model (GCM) named THOR that solves the three-dimensional nonhydrostatic Euler equations. Our general approach lifts the commonly used assumptions of a shallow atmosphere and hydrostatic equilibrium. We solve the “pole problem” (where converging meridians on a sphere lead to increasingly smaller time steps near the poles) by implementing an icosahedral grid. Irregularities in the grid, which lead to grid imprinting, are smoothed using the “spring dynamics” technique. We validate our implementation of spring dynamics by examining calculations of the divergence and gradient of test functions. To prevent the computational time step from being bottlenecked by having to resolve sound waves, we implement a split-explicit method together with a horizontally explicit and vertically implicit integration. We validate our GCM by reproducing the Earth and hot-Jupiter-like benchmark tests. THOR was designed to run on graphics processing units (GPUs), which allows for physics modules (radiative transfer, clouds, chemistry) to be added in the future, and is part of the open-source Exoclimes Simulation Platform (www.exoclime.org).

  14. Three-dimensional inverse modelling of damped elastic wave propagation in the Fourier domain

    NASA Astrophysics Data System (ADS)

    Petrov, Petr V.; Newman, Gregory A.

    2014-09-01

    3-D full waveform inversion (FWI) of seismic wavefields is routinely implemented with explicit time-stepping simulators. A clear advantage of explicit time stepping is the avoidance of solving the large-scale implicit linear systems that arise with frequency-domain formulations. However, FWI using explicit time stepping may require a very fine time step and, as a consequence, significant computational resources and run times. If the computational challenges of wavefield simulation can be effectively handled, an FWI scheme implemented in the frequency domain utilizing only a few frequencies offers a cost-effective alternative to FWI in the time domain. We have therefore implemented a 3-D FWI scheme for elastic wave propagation in the Fourier domain. To overcome the computational bottleneck in wavefield simulation, we have exploited an efficient Krylov iterative solver for the elastic wave equations approximated with second- and fourth-order finite differences. The solver does not exploit multilevel preconditioning for wavefield simulation, but is coupled efficiently to the inversion iteration workflow to reduce computational cost. The workflow is best described as a series of sequential inversion experiments where, in the case of seismic reflection acquisition geometries, the data have been laddered such that we first image highly damped data, followed by data in which the damping is systematically reduced. The key to our modelling approach is its ability to take advantage of solver efficiency when the elastic wavefields are damped. As the inversion experiment progresses, damping is significantly reduced, effectively simulating non-damped wavefields in the Fourier domain. While the cost of the forward simulation increases as damping is reduced, this is counterbalanced by the cost of the outer inversion iteration, which is reduced because of the better starting model obtained from the more heavily damped wavefield used in the previous inversion experiment. 
For cross-well data, it is also possible to launch a successful inversion experiment without laddering the damping constants. With this type of acquisition geometry, the solver is still quite effective using a small fixed damping constant. To avoid cycle skipping, we also employ a multiscale imaging approach in which the frequency content of the data is likewise laddered (with the data now including both reflection and cross-well acquisition geometries). Thus the inversion process is launched using low-frequency data to first recover the long spatial wavelengths of the image. With this image as a new starting model, adding higher-frequency data refines and enhances the resolution of the image. FWI using laddered frequencies with an efficient damping scheme enables reconstructing elastic attributes of the subsurface at a resolution that approaches half the smallest wavelength utilized to image the subsurface. We show the possibility of effectively carrying out such reconstructions using two to six frequencies, depending upon the application. Using the proposed FWI scheme, massively parallel computing resources are essential for reasonable execution times.

  15. Multigrid calculation of three-dimensional turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1989-01-01

    Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.
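The coarse-grid-correction mechanics underlying all of these multigrid variants can be shown on the simplest possible case. The sketch below (1-D Poisson with a weighted-Jacobi smoother and a single coarse level, far simpler than the turbomachinery solvers described) performs pre-smoothing, a coarse-grid correction, and post-smoothing:

```python
# Minimal two-grid multigrid cycle for -u'' = f on [0, 1], u(0) = u(1) = 0,
# with a weighted-Jacobi smoother: smooth, correct from the coarse grid,
# smooth again. The same mechanics drive the Euler/Navier-Stokes multigrid
# solvers described above, with Runge-Kutta or ADI sweeps as the smoother.
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):          # weighted Jacobi damps high frequencies
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def restrict(r):                     # full weighting: fine -> coarse
    rc = np.zeros((r.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec):                     # linear interpolation: coarse -> fine
    e = np.zeros(2 * ec.size - 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def coarse_solve(fc, hc):            # direct solve on the small coarse grid
    m = fc.size - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / hc**2
    uc = np.zeros_like(fc)
    uc[1:-1] = np.linalg.solve(A, fc[1:-1])
    return uc

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                                  # pre-smoothing
    ec = coarse_solve(restrict(residual(u, f, h)), 2 * h)
    u = u + prolong(ec)                                  # coarse-grid correction
    return smooth(u, f, h)                               # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)     # exact solution is sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid_cycle(u, f, h)

print(np.abs(residual(u, f, h)).max())   # tiny: fast, mesh-independent convergence
```

Recursing on the coarse solve instead of solving directly turns this two-grid cycle into a full V-cycle.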

  16. Solution procedure of dynamical contact problems with friction

    NASA Astrophysics Data System (ADS)

    Abdelhakim, Lotfi

    2017-07-01

    Dynamical contact is a common research topic because of its wide applications in engineering. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding, and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified against results from an independent numerical method.
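Uzawa's algorithm referenced above alternates a primal equilibrium solve with a projected update of the Lagrange multipliers. A sketch on a tiny obstacle-style problem (not the paper's elastodynamic formulation; the matrix, load, and step size rho are made up for illustration):

```python
# Uzawa's algorithm on a contact-like constrained problem: minimize
# 0.5*x^T A x - b^T x subject to the non-penetration constraint x >= 0,
# enforced by multipliers lam >= 0 via projected dual ascent:
#   x   = A^{-1} (b + lam)           (equilibrium solve)
#   lam = max(0, lam - rho * x)      (multiplier update + projection)
import numpy as np

def uzawa(A, b, rho, iters=200):
    lam = np.zeros_like(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = np.linalg.solve(A, b + lam)        # primal solve
        lam = np.maximum(0.0, lam - rho * x)   # project onto lam >= 0
    return x, lam

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # SPD "stiffness" matrix (made up)
b = np.array([-1.0, -1.0])                 # load pushing into the obstacle
x, lam = uzawa(A, b, rho=0.5)

print(x)     # ~ [0, 0]: contact is active at both nodes
print(lam)   # ~ [1, 1]: positive contact pressures (KKT gives lam = -b here)
```

Convergence requires rho small relative to the smallest eigenvalue of A; in the full friction problem the same loop carries the extra projection of tangential multipliers onto the friction cone.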

  17. A New Time-Space Accurate Scheme for Hyperbolic Problems. 1; Quasi-Explicit Case

    NASA Technical Reports Server (NTRS)

    Sidilkover, David

    1998-01-01

    This paper presents a new discretization scheme for hyperbolic systems of conservation laws. It satisfies the TVD property and relies on a new high-resolution mechanism which is compatible with the genuinely multidimensional approach proposed recently. This work can be regarded as a first step towards extending the genuinely multidimensional approach to unsteady problems. Discontinuity-capturing capabilities and the accuracy of the scheme are verified by a set of numerical tests.

  18. Facility Composer Design Wizards: A Method for Extensible Codified Design Logic Based on Explicit Facility Criteria

    DTIC Science & Technology

    2004-11-01

    institutionalized approaches to solving problems, company/client specific mission priorities (for example, State Department vs. Army Reserve and... independent variables that let the user leave a particular step before finishing all the items, and to return at a later time without any data loss. One... Sales, Main Exchange, Miscellaneous Shops, Post Office, Restaurant, and Theater.) Authorized customers served 04 Other criteria provided by the

  19. Some aspects of algorithm performance and modeling in transient analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1981-01-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed, and a promising set of implicit algorithms with variable time steps, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures).

  20. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage, and its evaluation, of existing higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  1. Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1997-01-01

    A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin-Lomax algebraic equilibrium model and the Johnson-King one-half-equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
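Explicit multistage schemes of this kind follow the pattern u^(k) = u^(0) + alpha_k * dt * R(u^(k-1)). A minimal sketch of that update on a scalar model problem, using classic five-stage coefficients as an assumed illustration (FLOMG-type codes add multigrid, residual smoothing, and numerical dissipation, all omitted here):

```python
import math

# Jameson-type explicit multistage update:
#   u^(0)   = u^n
#   u^(k)   = u^(0) + alpha_k * dt * R(u^(k-1)),  k = 1..m
#   u^(n+1) = u^(m)
# The five alpha_k below are classic illustrative values (an assumption here).
alphas = [1/4, 1/6, 3/8, 1/2, 1.0]

def multistage_step(u, dt, residual):
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Demo on the scalar model problem R(u) = -u with exact solution exp(-t).
u, dt = 1.0, 0.1
for _ in range(10):
    u = multistage_step(u, dt, lambda v: -v)
print(u)   # close to math.exp(-1.0)
```

Note that only the final stage has weight 1; the intermediate stages are chosen for stability and damping properties rather than time accuracy, which suits steady-state convergence with multigrid.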

  2. Unstructured grid methods for the simulation of 3D transient flows

    NASA Technical Reports Server (NTRS)

    Morgan, K.; Peraire, J.; Peiro, J.

    1994-01-01

    A description of the research work undertaken under NASA Research Grant NAGW-2962 has been given. Basic algorithmic development work, undertaken for the simulation of steady three dimensional inviscid flow, has been used as the basis for the construction of a procedure for the simulation of truly transient flows in three dimensions. To produce a viable procedure for implementation on the current generation of computers, moving boundary components are simulated by fixed boundaries plus a suitably modified boundary condition. Computational efficiency is increased by the use of an implicit time stepping scheme in which the equation system is solved by explicit multistage time stepping with multigrid acceleration. The viability of the proposed approach has been demonstrated by considering the application of the procedure to simulation of a transonic flow over an oscillating ONERA M6 wing.

  3. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  4. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Treesearch

    David Lewis; Ralph Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  5. Explicitly Teaching Social Skills Schoolwide: Using a Matrix to Guide Instruction

    ERIC Educational Resources Information Center

    Simonsen, Brandi; Myers, Diane; Everett, Susannah; Sugai, George; Spencer, Rebecca; LaBreck, Chris

    2012-01-01

    Socially skilled students are more successful in school. Just like academic skills, social skills need to be explicitly taught. Students, including students who display at-risk behavior, benefit when social skills instruction is delivered schoolwide as part of a comprehensive intervention approach. This article presents a seven-step action…

  6. Optimum design of hybrid phase locked loops

    NASA Technical Reports Server (NTRS)

    Lee, P.; Yan, T.

    1981-01-01

    The design procedure of phase locked loops is described in which the analog loop filter is replaced by a digital computer. Specific design curves are given for the step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase locked loop. This technique of optimization can be applied to the design of digital analog loops for other applications.

  7. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  8. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  9. Damping efficiency of the Tchamwa-Wielgosz explicit dissipative scheme under instantaneous loading conditions

    NASA Astrophysics Data System (ADS)

    Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard

    2009-11-01

    To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever element size is used for the regular meshing. A study on the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used. To cite this article: L. Mahéo et al., C. R. Mecanique 337 (2009).

  10. Modeling the heterogeneous catalytic activity of a single nanoparticle using a first passage time distribution formalism

    NASA Astrophysics Data System (ADS)

    Das, Anusheela; Chaudhury, Srabanti

    2015-11-01

    Metal nanoparticles are heterogeneous catalysts and have a multitude of non-equivalent, catalytic sites on the nanoparticle surface. The product dissociation step in such reaction schemes can follow multiple pathways. Proposed here for the first time is a completely analytical theoretical framework, based on the first passage time distribution, that incorporates the effect of heterogeneity in nanoparticle catalysis explicitly by considering multiple, non-equivalent catalytic sites on the nanoparticle surface. Our results show that in nanoparticle catalysis, the effect of dynamic disorder is manifested even at limiting substrate concentrations in contrast to an enzyme that has only one well-defined active site.

  11. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems.
The gas storage term is included in the explicit pressure calculation of both problems. Results from ablative composite plate problems are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius type rate equations and (2) pressure-dependent Arrhenius type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPU's, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPU's to use for a given problem.
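The hybrid idea of choosing the finite difference method according to a critical time scale can be sketched for the 1D heat equation (an illustrative stand-in, not the thesis code): the explicit update is used only when the step satisfies its stability limit dt <= dx^2/(2*alpha), and a backward Euler solve is used otherwise.

```python
import numpy as np

def step_heat(u, dt, dx, alpha):
    """One time step of the 1D heat equation with Dirichlet boundaries.

    Picks explicit FTCS when stable (r <= 0.5), implicit backward Euler otherwise.
    """
    r = alpha * dt / dx**2
    n = len(u)
    if r <= 0.5:                       # explicit branch: cheap, stable here
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2*u[1:-1] + u[:-2])
        return un
    # implicit branch: solve (I - r*L) u_new = u (dense solve for clarity)
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2*r)
    for i in range(1, n - 1):
        A[i, i-1] = A[i, i+1] = -r
    A[0, :] = 0.0; A[0, 0] = 1.0       # hold boundary values fixed
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, u)

x = np.linspace(0, 1, 21)
u = np.sin(np.pi * x)                  # decays like exp(-pi**2 * alpha * t)
for _ in range(100):
    u = step_heat(u, dt=0.01, dx=x[1] - x[0], alpha=1.0)
print(u.max())                         # smooth decay despite r = 4 >> 0.5
```

With these parameters the stability ratio is r = 4, so every step takes the implicit branch, yet the solution decays smoothly; halving dt far enough would switch the same code to the cheaper explicit branch.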

  12. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; hide

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
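Local time stepping, one of the schemes listed, can be illustrated on a model problem (a sketch, not BATS-R-US code): when only the steady state is of interest, each cell advances with its own CFL-limited step, so coarse cells are not held back by the smallest cell's stability limit.

```python
import numpy as np

# Steady state of u_t + a*u_x = 0 with inflow u = 1, first-order upwind.
# Each cell uses its own dt_i = CFL * dx_i / |a|; the iteration is not
# time-accurate, but it converges to the same steady state much faster
# on a nonuniform grid than a single global (smallest-cell) time step.
a, cfl = 1.0, 0.9
dx = np.concatenate([np.full(20, 0.01), np.full(20, 0.1)])  # fine + coarse cells
u = np.zeros(len(dx))
dt_local = cfl * dx / a              # per-cell pseudo-time step
for _ in range(200):
    res = np.empty_like(u)
    res[0] = -a * (u[0] - 1.0) / dx[0]        # inflow boundary value 1.0
    res[1:] = -a * (u[1:] - u[:-1]) / dx[1:]  # upwind difference
    u += dt_local * res
print(abs(u - 1.0).max())            # residual of the uniform steady state
```

This is the essence of "explicit time stepping with local time steps" for steady problems; the time-accurate, point-implicit, and fully implicit variants mentioned above trade this simplicity for stiffness handling.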

  13. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (Convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible to compromise between efficiency (a longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain.
In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  14. Seakeeping with the semi-Lagrangian particle finite element method

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio

    2017-07-01

    The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.

  15. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  16. A weak-coupling immersed boundary method for fluid-structure interaction with low density ratio of solid to fluid

    NASA Astrophysics Data System (ADS)

    Kim, Woojin; Lee, Injae; Choi, Haecheon

    2018-04-01

    We present a weak-coupling approach for fluid-structure interaction with low density ratio (ρ) of solid to fluid. For accurate and stable solutions, we introduce predictors, an explicit two-step method and the implicit Euler method, to obtain the provisional velocity and position of the fluid-structure interface at each time step, respectively. The incompressible Navier-Stokes equations, together with these provisional velocity and position at the fluid-structure interface, are solved in an Eulerian coordinate using an immersed-boundary finite-volume method on a staggered mesh. The dynamic equation of an elastic solid-body motion, together with the hydrodynamic force at the provisional position of the interface, is solved in a Lagrangian coordinate using a finite element method. Each governing equation for fluid and structure is implicitly solved using second-order time integrators. The overall second-order temporal accuracy is preserved even with the use of lower-order predictors. A linear stability analysis is also conducted for an ideal case to find the optimal explicit two-step method that provides stable solutions down to the lowest density ratio. With the present weak coupling, three different fluid-structure interaction problems were simulated: flows around an elastically mounted rigid circular cylinder, an elastic beam attached to the base of a stationary circular cylinder, and a flexible plate. The lowest density ratios providing stable solutions are searched for the first two problems and they are much lower than 1 (ρmin = 0.21 and 0.31, respectively). The simulation results agree well with those from the strong coupling also presented here and from previous numerical and experimental studies, indicating the efficiency and accuracy of the present weak coupling.

  17. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    NASA Astrophysics Data System (ADS)

    Octova, A.; Sule, R.

    2018-04-01

    Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers in the others. First-arrival travel time data recorded by each receiver are used as the input to the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters, with configurations of 24 and 45 receivers. The second step is applying the inversion process to field data from five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces a distinct shape that resembles the initial reconstruction of the 45-receiver synthetic model. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In checker-board resolution tests, anomalies as small as 2 m x 2 m can still be identified. Travel time cross-hole seismic tomography analysis shows that this method is well suited to describing subsurface structure and layer boundaries; the size and position of anomalies can be recognized and interpreted easily.

  18. Limits of acceptable change and natural resources planning: when is LAC useful, when is it not?

    Treesearch

    David N. Cole; Stephen F. McCool

    1997-01-01

    There are ways to improve the LAC process and its implementational procedures. One significant procedural modification is the addition of a new step. This step — which becomes the first step in the process — involves more explicitly defining goals and desired conditions. For other steps in the process, clarifications of concept and terminology are advanced, as are...

  19. Single-crossover recombination in discrete time.

    PubMed

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  20. Agglomeration Multigrid for an Unstructured-Grid Flow Solver

    NASA Technical Reports Server (NTRS)

    Frink, Neal; Pandya, Mohagna J.

    2004-01-01

    An agglomeration multigrid scheme has been implemented into the sequential version of the NASA code USM3Dns, a tetrahedral cell-centered finite volume Euler/Navier-Stokes flow solver. Efficiency and robustness of the multigrid-enhanced flow solver have been assessed for three configurations assuming an inviscid flow and one configuration assuming a viscous fully turbulent flow. The inviscid studies include a transonic flow over the ONERA M6 wing and a generic business jet with flow-through nacelles and a low subsonic flow over a high-lift trapezoidal wing. The viscous case includes a fully turbulent flow over the RAE 2822 rectangular wing. The multigrid solutions converged with 12%-33% of the Central Processing Unit (CPU) time required by the solutions obtained without multigrid. For all of the inviscid cases, multigrid in conjunction with an explicit time-stepping scheme performed the best with regard to run-time memory and CPU time requirements. However, for the viscous case multigrid had to be used with an implicit backward Euler time-stepping scheme that increased the run-time memory requirement by 22% as compared to the run made without multigrid.

  1. Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-01-01

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation ran nearly 59 and 10 times faster than the single- and six-core CPU-based RKC algorithms, respectively, for problem sizes of 262,144 ODEs and larger, using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed.
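The data layout exploited here, one independent ODE system per GPU thread, can be mimicked on a CPU with array vectorization. A sketch using a classical RK4 step on a batch of scalar decay ODEs (the paper uses fifth-order Runge-Kutta-Cash-Karp and real chemistry; both are simplified in this illustration):

```python
import numpy as np

def rk4_batch(y, k, dt):
    """One classical RK4 step for the whole batch of ODEs dy/dt = -k*y."""
    f = lambda v: -k * v                  # elementwise RHS: one ODE per entry
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(0)
k = rng.uniform(0.5, 2.0, size=524_288)   # a different rate constant per "thread"
y = np.ones_like(k)
for _ in range(10):                        # integrate to t = 1.0
    y = rk4_batch(y, k, dt=0.1)
err = np.abs(y - np.exp(-k)).max()         # compare against the exact exp(-k*t)
print(err)
```

Because every system runs the same instruction sequence on different data, this maps naturally onto GPU SIMT execution; stiffness breaks the picture by forcing tiny explicit steps, which is what motivates the stabilized RKC scheme above.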

  2. Pre-gilbertian conceptions of terrestrial magnetism

    USGS Publications Warehouse

    Smith, P.J.

    1968-01-01

    It is now well known that William Gilbert, in his De Magnete of 1600, first suggested that the earth behaves as a great magnet. By their very nature, however, such explicit statements tend, in retrospect, to be emphasised at the expense of less explicit antecedent ideas and experiments, with the result that, in the example under consideration here, the impression has sometimes been given that before Gilbert there was not the slightest suspicion that the earth exerts influence on the magnetic needle. In fact, Gilbert's conclusion represented the culmination of many centuries of thought and experimentation on the subject. This essay traces the main steps in the evolutionary process from the idea that magnetic 'virtue' derived from the heavens, through the gradual realisation that magnetism is closely associated with the earth, up to the time of Gilbert's definite statement. © 1968.

  3. Virtual prototyping of drop test using explicit analysis

    NASA Astrophysics Data System (ADS)

    Todorov, Georgi; Kamberov, Konstantin

    2017-12-01

    Increased requirements for reliability and safety, included in contemporary standards and norms, have a high impact on new product development. New numerical techniques based on virtual prototyping technology facilitate improving the product development cycle, resulting in reduced time and money spent at this stage as well as increased knowledge about particular failure mechanisms. The so-called "drop test" has become nearly a mandatory step in the development of any human-operated product. This study aims to demonstrate the assessment of the dynamic behaviour of a structure under impact loads, based on virtual prototyping using a typical nonlinear analysis: explicit dynamics. An example is presented, based on a plastic container that is used as a cartridge for a dispenser machine exposed to various working conditions. Different drop orientations were analyzed, and critical load cases and design weaknesses were found. Several design modifications have been proposed, based on a detailed review of the analysis results.

  4. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
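The CFL restriction discussed here can be demonstrated on a small model problem. A sketch (not the paper's spectral/hp code) time stepping 1D periodic advection with second-order Adams–Bashforth, bootstrapped by one forward Euler step, at a CFL number inside the stable range for this space discretisation:

```python
import numpy as np

# u_t + c*u_x = 0 on a periodic grid; first-order upwind in space stands in
# for the paper's spectral/hp discretisation, AB2 in time.
n, c = 100, 1.0
dx = 1.0 / n
dt = 0.3 * dx / c                      # CFL number 0.3: below the stability limit
x = np.arange(n) * dx
u = np.exp(-100 * (x - 0.5)**2)        # Gaussian pulse

def rhs(u):
    return -c * (u - np.roll(u, 1)) / dx   # upwind difference (c > 0)

r_old = rhs(u)
u = u + dt * r_old                     # bootstrap: one forward Euler step
for _ in range(499):
    r_new = rhs(u)
    u = u + dt * (1.5 * r_new - 0.5 * r_old)   # AB2 update
    r_old = r_new
print(u.max())   # pulse stays bounded; upwind diffusion damps it somewhat
```

AB2 needs only one new right-hand-side evaluation per step, versus four for RK4, but its smaller stability region forces a smaller dt; that cost-versus-step-size trade-off is exactly what the paper quantifies across mesh sizes and polynomial orders.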

  5. Does Classroom Explicitation of Initial Conceptions Favour Conceptual Change or Is It Counter-Productive?

    ERIC Educational Resources Information Center

    Potvin, Patrice; Mercier, Julien; Charland, Patrick; Riopel, Martin

    2012-01-01

    This research investigates the effect of classroom explicitation of initial conceptions (CEIC) on conceptual change in the context of learning electricity. Eight hundred seventy-five thirteen-year-olds were tested in laboratory conditions to see whether CEIC is a productive step toward conceptual change. All students experienced a…

  6. Moderating Effects of Mathematics Anxiety on the Effectiveness of Explicit Timing

    ERIC Educational Resources Information Center

    Grays, Sharnita D.; Rhymer, Katrina N.; Swartzmiller, Melissa D.

    2017-01-01

    Explicit timing is an empirically validated intervention to increase problem completion rates by exposing individuals to a stopwatch and explicitly telling them of the time limit for the assignment. Though explicit timing has proven to be effective for groups of students, some students may not respond well to explicit timing based on factors such…

  7. Lagrangian continuum dynamics in ALEGRA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K. W.; Love, Edward

    Alegra is an ALE (Arbitrary Lagrangian-Eulerian) multi-material finite element code that emphasizes large deformations and strong shock physics. The Lagrangian continuum dynamics package in Alegra uses a Galerkin finite element spatial discretization and an explicit central-difference stepping method in time. The goal of this report is to describe in detail the characteristics of this algorithm, including the conservation and stability properties. The details provided should help both researchers and analysts understand the underlying theory and numerical implementation of the Alegra continuum hydrodynamics algorithm.
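
    The explicit central-difference time stepping described above can be sketched for a single degree of freedom; the lumped (diagonal) mass, the linear internal force, and all names here are illustrative assumptions, not Alegra's actual interfaces.

    ```python
    import numpy as np

    # Minimal sketch of one explicit central-difference step for M a = f_ext - f_int,
    # as used in Lagrangian hydrocodes; lumped (diagonal) mass assumed, so no
    # system solve is needed. The linear internal force is a toy stand-in.

    def central_difference_step(x, v, m, f_ext, k, dt):
        f_int = k * x                  # toy linear internal force
        a = (f_ext - f_int) / m        # explicit: elementwise, no solver
        v_new = v + dt * a             # velocity at the half step
        x_new = x + dt * v_new         # position update
        return x_new, v_new

    x, v = np.array([0.0]), np.array([1.0])
    m, k = np.array([1.0]), np.array([4.0])
    for _ in range(1000):
        x, v = central_difference_step(x, v, m, np.zeros(1), k, dt=0.01)
    # With dt well below the stability limit 2/omega, the oscillator stays bounded
    assert abs(x[0]) <= 0.6 and abs(v[0]) <= 1.2
    ```

    The conditional stability visible here (the step must resolve the highest structural frequency) is the price paid for avoiding any linear solve, which is why such codes emphasize speed and simplicity per step.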

  8. On coupling fluid plasma and kinetic neutral physics models

    DOE PAGES

    Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...

    2017-03-01

    The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.

  9. An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Ferrari, A. J.

    1971-01-01

    A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
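
    The first reduction step, a weighted least-squares fit of each long-period Kepler element as an independent function of time, can be sketched with a simple bias-plus-rate model; the data, noise model, and polynomial form are hypothetical stand-ins for the Doppler reduction, not mission values.

    ```python
    import numpy as np

    # Hypothetical sketch of the first reduction step: a weighted least-squares
    # fit of one long-period Kepler element modeled as an independent function
    # of time (bias + secular rate). Data and noise model are illustrative.

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 50)                 # observation times
    truth = 2.0 + 0.3 * t                          # element = bias + rate * t
    sigma = 0.05 + 0.01 * t                        # per-observation noise level
    obs = truth + sigma * rng.standard_normal(50)

    A = np.column_stack([np.ones_like(t), t])      # design matrix for c0 + c1*t
    w = 1.0 / sigma                                # weight rows by 1/sigma
    coef, *_ = np.linalg.lstsq(A * w[:, None], obs * w, rcond=None)
    assert abs(coef[1] - 0.3) < 0.05               # secular rate recovered
    ```

    The recovered rates (the `coef[1]` analogue, one per element and per orbit) are what the second processor would feed into the long-period Lagrange perturbation equations to estimate gravity coefficients.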

  10. A numerical solution of the supersonic flow over a rearward facing step with transverse non-reacting hydrogen injection

    NASA Technical Reports Server (NTRS)

    Berman, H. A.; Anderson, J. D., Jr.; Drummond, J. P.

    1982-01-01

    The present investigation represents an application of computational fluid dynamics to a problem associated with the flow in the combustor region of a supersonic combustion ramjet engine (scramjet). The governing equations are considered, taking into account the Navier-Stokes equations, a molecular viscosity calculation, the molecular thermal conductivity, molecular diffusion, and a turbulence model. The employed numerical solution is patterned after the explicit, time-dependent, unsplit, predictor-corrector, finite-difference method given by MacCormack (1969). The calculation is concerned with the supersonic flow over a rearward-facing step with transverse H2 injection at conditions germane to the combustor region of a scramjet engine. The H2 jet acts as an effective body which essentially shields the primary flow from the rearward-facing step, thus substantially changing the wave pattern in the primary flow.
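
    The explicit, predictor-corrector MacCormack (1969) method that the solution is patterned after can be sketched on 1D linear advection with periodic boundaries; the study itself applies it to the full Navier-Stokes equations, so this is only a minimal analogue of the stepping pattern.

    ```python
    import numpy as np

    # Minimal analogue of the explicit MacCormack predictor-corrector method,
    # applied to 1D linear advection u_t + a*u_x = 0 on a periodic domain.

    def maccormack_step(u, a, dt, dx):
        c = a * dt / dx
        up = u - c * (np.roll(u, -1) - u)                  # predictor: forward diff
        return 0.5 * (u + up - c * (up - np.roll(up, 1)))  # corrector: backward diff

    n = 100
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2.0 * np.pi * x)
    a, dx, dt = 1.0, 1.0 / n, 0.005                        # CFL = a*dt/dx = 0.5
    for _ in range(200):                                   # advect for one period
        u = maccormack_step(u, a, dt, dx)
    # the periodic profile returns close to its initial state (2nd-order accuracy)
    assert np.max(np.abs(u - np.sin(2.0 * np.pi * x))) < 0.05
    ```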

  11. An enhanced two-step floating catchment area (E2SFCA) method for measuring spatial accessibility to primary care physicians.

    PubMed

    Luo, Wei; Qi, Yi

    2009-12-01

    This paper presents an enhancement of the two-step floating catchment area (2SFCA) method for measuring spatial accessibility, addressing the problem of uniform access within the catchment by applying weights to different travel time zones to account for distance decay. The enhancement is proved to be another special case of the gravity model. When applying this enhanced 2SFCA (E2SFCA) to measure the spatial access to primary care physicians in a study area in northern Illinois, we find that it reveals spatial accessibility pattern that is more consistent with intuition and delineates more spatially explicit health professional shortage areas. It is easy to implement in GIS and straightforward to interpret.
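
    The two-step logic with distance-decay weights can be sketched as follows; the travel-time zone breaks, weights, and data are hypothetical illustrations, not values calibrated in the paper.

    ```python
    # Minimal sketch of the enhanced two-step floating catchment area (E2SFCA)
    # idea: within each catchment, travel-time zones get distance-decay weights
    # instead of uniform access. Zone breaks, weights, and data are hypothetical.

    W = {0: 1.00, 1: 0.68, 2: 0.22}   # weight per zone (0-10, 10-20, 20-30 min)

    def e2sfca(physicians, population, zones):
        """zones[(i, j)] = travel-time zone between population i and site j."""
        # step 1: weighted physician-to-population ratio at each physician site j
        R = {}
        for j, s in physicians.items():
            served = sum(population[i] * W[z]
                         for (i, jj), z in zones.items() if jj == j)
            R[j] = s / served if served else 0.0
        # step 2: sum the weighted ratios reachable from each population site i
        return {i: sum(R[j] * W[z] for (ii, j), z in zones.items() if ii == i)
                for i in population}

    physicians = {"A": 10, "B": 4}
    population = {"p1": 1000, "p2": 2000}
    zones = {("p1", "A"): 0, ("p2", "A"): 2, ("p1", "B"): 1, ("p2", "B"): 0}
    acc = e2sfca(physicians, population, zones)
    # p1 sits in site A's nearest zone, so its accessibility exceeds p2's
    assert acc["p1"] > acc["p2"]
    ```

    Setting every weight to 1 recovers the original 2SFCA with uniform access inside the catchment, which is exactly the limitation the enhancement addresses.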

  12. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split- explicit, with large-time-step for scalar transport, and small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  13. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations by (a) accommodating awareness modeling and updating and (b) collaboratively searching for and tracking space objects. The basic approach is as follows. First, partition the relevant region of interest into district cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon and avoid risks. Fourth, if all explicitly specified requirements are satisfied and there are still sensing resources available, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method with a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.

  14. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
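
    The core numerical point of the two-part study, that fixed-step explicit schemes can fail where implicit schemes remain robust, can be illustrated on a single linear store ds/dt = -k*s. This is a toy stand-in for a conceptual model flux, not one of the six models tested.

    ```python
    # Toy stand-in for a conceptual model store: a single linear reservoir
    # ds/dt = -k*s. With k*dt > 2 the fixed-step explicit Euler scheme
    # oscillates and diverges, while fixed-step implicit Euler stays stable
    # and monotone at any step size.

    def explicit_euler(s, k, dt, n):
        for _ in range(n):
            s = s + dt * (-k * s)
        return s

    def implicit_euler(s, k, dt, n):
        for _ in range(n):
            s = s / (1.0 + k * dt)       # exact solve of s_new = s + dt*(-k*s_new)
        return s

    s0, k, dt = 1.0, 3.0, 1.0            # k*dt = 3 > 2: beyond explicit stability
    assert abs(explicit_euler(s0, k, dt, 20)) > 1e3     # blows up
    assert 0.0 < implicit_euler(s0, k, dt, 20) < s0     # stable, monotone decay
    ```

    In a calibration setting, the explicit scheme's error depends discontinuously on k, which is the mechanism behind the deformed objective functions and spurious local optima described above.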

  15. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampére (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
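
    The flavor of a nonlinearly converged implicit scheme can be conveyed with a toy analogue: an implicit-midpoint push for one particle in a linear electrostatic field, iterated (Picard) to tight tolerance, conserves energy to round-off even for large time steps. This is an illustration only, not the paper's orbit-averaged Vlasov-Ampère scheme.

    ```python
    # Toy analogue of nonlinearly converged implicit time stepping: an
    # implicit-midpoint push for one particle in the linear electrostatic
    # field E(x) = -x, iterated to tight nonlinear tolerance. Energy is
    # then conserved to round-off even for large steps.

    def implicit_midpoint_step(x, v, dt, tol=1e-14, max_iter=300):
        xn, vn = x, v                            # initial guess: old state
        for _ in range(max_iter):
            xm, vm = 0.5 * (x + xn), 0.5 * (v + vn)
            xn_new = x + dt * vm                 # midpoint velocity
            vn_new = v + dt * (-xm)              # midpoint field E(x) = -x
            if abs(xn_new - xn) + abs(vn_new - vn) < tol:
                return xn_new, vn_new
            xn, vn = xn_new, vn_new              # Picard iteration (contracts for dt < 2)
        return xn, vn

    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    for _ in range(100):
        x, v = implicit_midpoint_step(x, v, dt=1.5)   # large implicit step
    assert abs(0.5 * (v * v + x * x) - e0) < 1e-10    # energy to round-off
    ```

    A loosely converged iteration would lose this conservation property, which is the motivation for iterating particles and fields to a tight nonlinear tolerance.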

  16. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  17. Current capabilities for simulating the extreme distortion of thin structures subjected to severe impacts

    NASA Technical Reports Server (NTRS)

    Key, Samuel W.

    1993-01-01

    The explicit transient dynamics technology in use today for simulating the impact and subsequent transient dynamic response of a structure has its origins in the 'hydrocodes' dating back to the late 1940's. The growth in capability in explicit transient dynamics technology parallels the growth in speed and size of digital computers. Computer software for simulating the explicit transient dynamic response of a structure is characterized by algorithms that use a large number of small steps. In explicit transient dynamics software there is a significant emphasis on speed and simplicity. The finite element technology used to generate the spatial discretization of a structure is based on a compromise between completeness of the representation for the physical processes modelled and speed in execution. That is, since it is expected in every calculation that the deformation will be finite and the material will be strained beyond the elastic range, the geometry and the associated gradient operators must be reconstructed, as well as complex stress-strain models evaluated at every time step. As a result, finite elements derived for explicit transient dynamics software use the simplest and barest constructions possible for computational efficiency while retaining an essential representation of the physical behavior. The best example of this technology is the four-node bending quadrilateral derived by Belytschko, Lin and Tsay. Today, the speed, memory capacity and availability of computer hardware allows a number of the previously used algorithms to be 'improved.' That is, it is possible with today's computing hardware to modify many of the standard algorithms to improve their representation of the physical process at the expense of added complexity and computational effort. The purpose is to review a number of these algorithms and identify the improvements possible. 
In many instances, both the older, faster version of the algorithm and the improved and somewhat slower version of the algorithm are found implemented together in software. Specifically, the following seven algorithmic items are examined: the invariant time derivatives of stress used in material models expressed in rate form; incremental objectivity and strain used in the numerical integration of the material models; the use of one-point element integration versus mean quadrature; shell elements used to represent the behavior of thin structural components; beam elements based on stress-resultant plasticity versus cross-section integration; the fidelity of elastic-plastic material models in their representation of ductile metals; and the use of Courant subcycling to reduce computational effort.

  18. Proposed best modeling practices for assessing the effects of ecosystem restoration on fish

    USGS Publications Warehouse

    Rose, Kenneth A; Sable, Shaye; DeAngelis, Donald L.; Yurek, Simeon; Trexler, Joel C.; Graf, William L.; Reed, Denise J.

    2015-01-01

    Large-scale aquatic ecosystem restoration is increasing and is often controversial because of the economic costs involved, with the focus of the controversies gravitating to the modeling of fish responses. We present a scheme for best practices in selecting, implementing, interpreting, and reporting of fish modeling designed to assess the effects of restoration actions on fish populations and aquatic food webs. Previous best practice schemes that tended to be more general are summarized, and they form the foundation for our scheme that is specifically tailored for fish and restoration. We then present a 31-step scheme, with supporting text and narrative for each step, which goes from understanding how the results will be used through post-auditing to ensure the approach is used effectively in subsequent applications. We also describe 13 concepts that need to be considered in parallel to these best practice steps. Examples of these concepts include: life cycles and strategies; variability and uncertainty; nonequilibrium theory; biological, temporal, and spatial scaling; explicit versus implicit representation of processes; and model validation. These concepts are often not considered or not explicitly stated, and casual treatment of them leads to miscommunication and misunderstandings, which in turn often underlie the resulting controversies. We illustrate a subset of these steps, and their associated concepts, using the three case studies of Glen Canyon Dam on the Colorado River, the wetlands of coastal Louisiana, and the Everglades. Use of our proposed scheme will require investment of additional time and effort (and dollars) to be done effectively. We argue that such an investment is well worth it and will more than pay back in the long run in effective and efficient restoration actions and likely avoided controversies and legal proceedings.

  19. "If I Can't Have You Nobody Will": Explicit Threats in the Context of Coercive Control.

    PubMed

    Logan, T K

    2017-02-01

    Physical assault is only one tool in partner abuse characterized by coercive control. Coercive control creates an ongoing state of fear and chronic stress. Explicit threats are an important component of coercive control yet have received limited research attention. This study examined 210 women with protective orders (POs) against abusive (ex)partners and their experiences of explicit threats including threats of harm and death, threats about harming friends and family, and actual threats to friends and family. There are 4 main findings from this study: (a) explicit threats of harm and death, threats about harming others, and actual threats to others are common both in the history of the abusive relationship as well as within 6 months prior to obtaining a PO but are only moderately correlated with each other; (b) the high-frequency threats of harm group had the highest rates of concurrent abuse, violence, distress, and fear; (c) the prevalence and frequency of threats changed over time for all 3 types of threats examined in this study; and (d) understanding the variety of threats partner abuse victims experience, especially threats of third-party harm, may be important in understanding the larger context and consequences of partner abuse. This study is an interim step toward a better understanding of the role of explicit threats in abusive relationships. Future research is needed to examine the prevalence, frequency, trajectory, features, context, and types of explicit threats that victims of partner abuse experience. This information may be especially key to understanding more about future risk of harm, risk of harm to others, victim distress and fear, and safety planning.

  20. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, have been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, e.g., hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme indicating the accuracy gain when the reaction and transport terms are solved coupled. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost in sub-iterations for convergence within each time step and in the integration for chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated for capturing the hydrogen detonation waves. 
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
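
    The Strang splitting scheme used as a comparison point advances reaction and transport in a half/full/half pattern; a minimal scalar sketch (with hypothetical coefficients and exact sub-solves) shows its second-order convergence:

    ```python
    import math

    # Strang splitting for dy/dt = -a*y - y**2: half reaction step, full decay
    # step, half reaction step, each sub-solved exactly. A toy scalar analogue
    # of reaction/transport splitting; coefficients are illustrative.

    def strang(y0, a, t, n):
        y, dt = y0, t / n
        for _ in range(n):
            y = y / (1.0 + y * dt / 2.0)   # reaction half step: y' = -y**2
            y = y * math.exp(-a * dt)      # decay full step:    y' = -a*y
            y = y / (1.0 + y * dt / 2.0)   # reaction half step
        return y

    a, t, y0 = 1.0, 1.0, 1.0
    exact = 1.0 / ((1.0 / y0 + 1.0 / a) * math.exp(a * t) - 1.0 / a)
    e1 = abs(strang(y0, a, t, 10) - exact)
    e2 = abs(strang(y0, a, t, 20) - exact)
    assert 3.0 < e1 / e2 < 5.0             # halving dt roughly quarters the error
    ```

    The residual error here is pure splitting error (each sub-step is exact), whereas a coupled semi-implicit scheme like the one proposed avoids that error term entirely, consistent with the accuracy gain reported.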

  1. Functional Communication Training in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Battaglia, Dana

    2017-01-01

    This article explicitly addresses the correlation between communication and behavior, and describes how to provide intervention addressing these two overlapping domains using an intervention called functional communication training (FCT; E. G. Carr & Durand, 1985) in individuals with ASD. A step-by-step process is outlined with supporting…

  2. The molecular biology of memory: cAMP, PKA, CRE, CREB-1, CREB-2, and CPEB

    PubMed Central

    2012-01-01

    The analysis of the contributions to synaptic plasticity and memory of cAMP, PKA, CRE, CREB-1, CREB-2, and CPEB has recruited the efforts of many laboratories all over the world. These are six key steps in the molecular biological delineation of short-term memory and its conversion to long-term memory for both implicit (procedural) and explicit (declarative) memory. I here first trace the background for the clinical and behavioral studies of implicit memory that made a molecular biology of memory storage possible, and then detail the discovery and early history of these six molecular steps and their roles in explicit memory. PMID:22583753

  3. Formulation of boundary conditions for the multigrid acceleration of the Euler and Navier Stokes equations

    NASA Technical Reports Server (NTRS)

    Jentink, Thomas Neil; Usab, William J., Jr.

    1990-01-01

    An explicit, Multigrid algorithm was written to solve the Euler and Navier-Stokes equations with special consideration given to the coarse mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A 4-Stage Hybrid Runge-Kutta Scheme is used to advance the solution in time, and Multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard Multigrid method and a new approach to formulating the Multigrid equations.

  4. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to governing equations to define the large scale field. This gives rise to additional second order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth order differencing scheme in space and a second order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments and statistical information extracted from the computer generated data.
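
    The second-order Adams-Bashforth predictor used for explicit time stepping can be sketched on a scalar model problem; the forward-Euler bootstrap for the first step is a standard choice assumed here, not a detail taken from the paper.

    ```python
    import math

    # Second-order Adams-Bashforth time stepping on u' = f(u), a scalar
    # stand-in for the filtered momentum equations; the first step uses a
    # forward-Euler bootstrap since AB2 needs one history value.

    def ab2_solve(f, u0, dt, n):
        u = u0
        f_prev = f(u)
        u = u + dt * f_prev                  # step 1: forward Euler bootstrap
        for _ in range(n - 1):
            f_cur = f(u)
            u = u + dt * (1.5 * f_cur - 0.5 * f_prev)
            f_prev = f_cur
        return u

    exact = math.exp(-1.0)                   # u' = -u, u(0) = 1, at t = 1
    e1 = abs(ab2_solve(lambda u: -u, 1.0, 0.01, 100) - exact)
    e2 = abs(ab2_solve(lambda u: -u, 1.0, 0.005, 200) - exact)
    assert 3.0 < e1 / e2 < 5.0               # halving dt roughly quarters the error
    ```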

  5. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  6. Clipping in neurocontrol by adaptive dynamic programming.

    PubMed

    Fairbank, Michael; Prokhorov, Danil; Alonso, Eduardo

    2014-10-01

    In adaptive dynamic programming, neurocontrol, and reinforcement learning, the objective is for an agent to learn to choose actions so as to minimize a total cost function. In this paper, we show that when discretized time is used to model the motion of the agent, it can be very important to do clipping on the motion of the agent in the final time step of the trajectory. By clipping, we mean that the final time step of the trajectory is to be truncated such that the agent stops exactly at the first terminal state reached, and no distance further. We demonstrate that when clipping is omitted, learning performance can fail to reach the optimum, and when clipping is done properly, learning performance can improve significantly. The clipping problem we describe affects algorithms that use explicit derivatives of the model functions of the environment to calculate a learning gradient. These include backpropagation through time for control and methods based on dual heuristic programming. However, the clipping problem does not significantly affect methods based on heuristic dynamic programming, temporal differences learning, or policy-gradient learning algorithms.
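
    The clipping idea, truncating the final time step so the agent stops exactly at the first terminal state reached, can be sketched in a deterministic 1D rollout; the setup and names are illustrative, not the paper's benchmark.

    ```python
    # Minimal sketch of clipping: when a discretized trajectory's final step
    # would overshoot the terminal state, truncate that step so the agent
    # stops exactly at the boundary, and charge cost only for the clipped
    # fraction of the step. Setup is illustrative.

    def rollout_cost(x0, v, dt, goal, step_cost):
        """1D agent moving at speed v toward goal >= x0; cost accrues per unit time."""
        x, cost = x0, 0.0
        while x < goal:
            step = v * dt
            if x + step >= goal:                 # final step would overshoot:
                frac = (goal - x) / step         # clip to land exactly on the goal
                return cost + step_cost * dt * frac
            x += step
            cost += step_cost * dt
        return cost

    # Without clipping the cost jumps in dt-sized increments; with clipping it
    # matches the continuous-time cost exactly in this deterministic example.
    assert abs(rollout_cost(0.0, 1.0, 0.3, 1.0, 1.0) - 1.0) < 1e-12
    ```

    The clipped fraction `frac` depends smoothly on the state and action, which is why methods that differentiate through the model (such as backpropagation through time) are the ones affected when clipping is omitted.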

  7. Stressless Schwarzschild

    NASA Astrophysics Data System (ADS)

    Deser, S.

    2014-01-01

    This self-contained pedagogical simple explicit 6-step derivation of the Schwarzschild solution, in "" formulation and conformal spatial gauge, (almost) avoids all affinity, curvature and index gymnastics.

  8. Regionally Implicit Discontinuous Galerkin Methods for Solving the Relativistic Vlasov-Maxwell System Submitted to Iowa State University

    NASA Astrophysics Data System (ADS)

    Guthrey, Pierson Tyler

    The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons could accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities, thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin method (DG). 
One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. The maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally-implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which had previously been shown to be equivalent (for linear constant-coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG (STDG) method. The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information, but also neighboring information. With this modification, stability is greatly enhanced: the polynomial-degree dependence of the maximum time-step is removed, and time-steps are vastly improved in multiple spatial dimensions. Upon the development of the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations. For each we validate the high-order method on several test cases. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.
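    The degree dependence that RIDG is designed to remove can be made concrete with the standard explicit RKDG step-size heuristic (a common textbook rule of thumb, not a result from this thesis):

```python
def rkdg_max_dt(h, c, k, cfl=1.0):
    """Maximum stable explicit time-step for an RKDG discretization by the
    common heuristic dt <= CFL * h / (c * (2k + 1)), with cell size h, wave
    speed c, and polynomial degree k. This bound is a standard rule of thumb
    used here only to illustrate why the stable step shrinks as the DG
    polynomial degree grows."""
    return cfl * h / (c * (2 * k + 1))

# degree dependence of the stable step for a fixed mesh: k = 0, 1, 2, 3
steps = [rkdg_max_dt(h=0.1, c=1.0, k=k) for k in range(4)]
```

    Each added dimension tightens the bound further, which is why removing the (2k + 1) factor matters most for high-order, high-dimensional kinetic simulations.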

  9. Explaining the Relationship Between Sexually Explicit Internet Material and Casual Sex: A Two-Step Mediation Model.

    PubMed

    Vandenbosch, Laura; van Oosten, Johanna M F

    2018-07-01

    Despite increasing interest in the implications of adolescents' use of sexually explicit Internet material (SEIM), we still know little about the relationship between SEIM use and adolescents' casual sexual activities. Based on a three-wave online panel survey study among Dutch adolescents (N = 1079; 53.1% boys; 93.5% with an exclusively heterosexual orientation; M age  = 15.11; SD = 1.39), we found that watching SEIM predicted engagement in casual sex over time. In turn, casual sexual activities partially predicted adolescents' use of SEIM. A two-step mediation model was tested to explain the relationship between watching SEIM and casual sex. It was partially confirmed. First, watching SEIM predicted adolescents' perceptions of SEIM as a relevant information source from Wave 2 to Wave 3, but not from Wave 1 to Wave 2. Next, such perceived utility of SEIM was positively related to stronger instrumental attitudes toward sex and thus their views about sex as a core instrument for sexual gratification. Lastly, adolescents' instrumental attitudes toward sex predicted adolescents' engagement in casual sex activities consistently across waves. Partial support emerged for a reciprocal relationship between watching SEIM and perceived utility. We did not find a reverse relationship between casual sex activities and instrumental attitudes toward sex. No significant gender differences emerged.

  10. A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.

    1989-01-01

    A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
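    The characteristic-foot idea underlying such schemes can be illustrated with a minimal first-order semi-Lagrangian step for 1-D advection: trace each grid point back along its characteristic and interpolate at the departure point. This is a linear-interpolation sketch of the general idea; the paper uses high-order backward differentiation along characteristics and spectral-element interpolation.

```python
import numpy as np

def semi_lagrangian_step(u, c, dt, dx):
    """One semi-Lagrangian step for u_t + c u_x = 0 on a periodic grid:
    trace each node back to its characteristic foot x - c*dt and linearly
    interpolate the old solution there. Illustrative low-order sketch."""
    n = u.size
    x = np.arange(n) * dx
    foot = (x - c * dt) % (n * dx)       # departure point of the characteristic
    j = np.floor(foot / dx).astype(int)  # left neighbour index
    theta = foot / dx - j                # fractional position in the cell
    return (1 - theta) * u[j] + theta * u[(j + 1) % n]

u = np.sin(2 * np.pi * np.arange(8) / 8)
v = semi_lagrangian_step(u, c=1.0, dt=0.125, dx=0.125)
```

    Because the update only evaluates data at the foot of the characteristic, the convective term never enters an explicit stability bound, which is the source of the improved temporal stability noted above.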

  11. Time-fractional Cahn-Allen and time-fractional Klein-Gordon equations: Lie symmetry analysis, explicit solutions and convergence analysis

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru

    2018-03-01

    This research presents the Lie symmetry analysis, explicit solutions and convergence analysis for the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are each reduced to a nonlinear ordinary differential equation of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence of the obtained explicit solutions is investigated. Some figures for the obtained explicit solutions are also presented.
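    The explicit power series method amounts to substituting a series ansatz into the reduced ODE and reading off a recurrence for the coefficients. A minimal integer-order instance (our toy example, not the paper's fractional equations) for y' = y, y(0) = 1:

```python
from math import factorial

def power_series_coeffs(n_terms):
    """Coefficients a_k of y(t) = sum_k a_k t^k solving y' = y, y(0) = 1,
    from the explicit recurrence (k + 1) a_{k+1} = a_k. A minimal
    integer-order sketch of the power-series method; the paper applies the
    analogous recurrence to the fractional-order reduced ODEs."""
    a = [1.0]
    for k in range(n_terms - 1):
        a.append(a[k] / (k + 1))
    return a

coeffs = power_series_coeffs(8)  # recovers the Taylor coefficients of exp(t)
```

    For the fractional-order case the recurrence involves Gamma-function ratios instead of (k + 1), and the convergence analysis bounds the resulting coefficient growth.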

  12. Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Cheng, Jiahao

    2018-02-01

    Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high-fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm, without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
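    The subcycling idea (a single coarse step on the slowly deforming domain, many fine steps on the fast one, synchronization only at coarse increments) can be sketched on a toy two-variable system. This forward-Euler sketch uses hypothetical names and a trivial coupling; the paper couples the sub-domains through a predictor-corrector interface.

```python
def subcycled_step(y_slow, y_fast, f_slow, f_fast, dt_coarse, n_sub):
    """One coarse increment of a two-domain subcycling scheme: the slowly
    deforming (untwinned) variable takes one coarse step while the fast
    (twinned) variable subcycles with dt_coarse / n_sub. Illustrative
    forward-Euler sketch, not the paper's CPFE implementation."""
    y_slow = y_slow + dt_coarse * f_slow(y_slow)
    dt_fine = dt_coarse / n_sub
    for _ in range(n_sub):
        y_fast = y_fast + dt_fine * f_fast(y_fast)
    return y_slow, y_fast

# stiffness contrast between domains: slow decay vs. 50x faster decay;
# taking the coarse step on the fast variable (factor 1 - 50*0.1 = -4)
# would be unstable, while subcycling keeps both updates stable
ys, yf = 1.0, 1.0
for _ in range(10):                 # integrate to T = 1 with dt_coarse = 0.1
    ys, yf = subcycled_step(ys, yf, lambda y: -y, lambda y: -50.0 * y, 0.1, 50)
```

    The payoff is that the expensive fine step is confined to the small, fast-evolving region instead of being imposed globally.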

  13. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    NASA Astrophysics Data System (ADS)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements, including (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied schemes in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
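    The implicit-explicit split itself can be illustrated with a first-order sketch: treat the stiff linear term implicitly and the remaining term explicitly. This is a minimal illustration of the IMEX idea only; the paper's IMEX-SSP2(2,3,2) is a second-order Runge-Kutta realization of the same splitting.

```python
def imex_euler(y0, lam, f_exp, dt, n):
    """First-order IMEX step for y' = lam*y + f_exp(y): the stiff linear
    term lam*y is taken implicitly (solved exactly here since it is linear)
    and f_exp explicitly. Minimal sketch of the implicit-explicit idea, not
    the paper's two-stage-implicit/three-stage-explicit scheme."""
    y = y0
    for _ in range(n):
        y = (y + dt * f_exp(y)) / (1.0 - dt * lam)
    return y

# stiff decay plus a mild source: stable at dt = 0.1 even though a fully
# explicit Euler step would amplify the stiff mode by |1 + dt*lam| = 99
y_end = imex_euler(1.0, -1000.0, lambda y: 0.1, 0.1, 20)
```

    The iteration converges to the steady state 0.1/1000 = 1e-4 despite a step size a hundred times larger than the explicit stability limit, which is exactly the advantage HEVI-type splittings exploit for fast vertical acoustic modes.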

  14. Explicit crystal host effects on excited state properties of linear polyacenes: towards a room-temperature maser

    NASA Astrophysics Data System (ADS)

    Charlton, Robert; Bogatko, Stuart; Zuehlsdorff, Tim; Hine, Nicholas; Horsfield, Andrew; Haynes, Peter

    Maser technology has been held back for decades by the impracticality of the operating conditions of traditional masing devices, such as cryogenic freezing and strong magnetic fields. Recently it has been experimentally demonstrated that pentacene in p-terphenyl can act as a viable solid-state room-temperature maser by exploiting the alignment of the low-lying singlet and triplet excited states of pentacene. To understand the operation of this device from first principles, an ab initio study of the excitonic properties of pentacene in p-terphenyl has been carried out using time-dependent density functional theory (TDDFT), implemented in the linear-scaling ONETEP software (www.onetep.org). In particular, we focus on the impact that the wider crystal has on the localised pentacene excitations by performing an explicit DFT treatment of the p-terphenyl environment. We demonstrate the importance of explicit crystal host effects in calculating the excitation energies of pentacene in p-terphenyl, providing important information for the operation of the maser. We then use this same approach to test the viability of other linear polyacenes as maser candidates as a screening step before experimental testing.

  15. A lab-controlled simulation of a letter-speech sound binding deficit in dyslexia.

    PubMed

    Aravena, Sebastián; Snellings, Patrick; Tijms, Jurgen; van der Molen, Maurits W

    2013-08-01

    Dyslexic and non-dyslexic readers engaged in a short training aimed at learning eight basic letter-speech sound correspondences within an artificial orthography. We examined whether a letter-speech sound binding deficit is behaviorally detectable within the initial steps of learning a novel script. Both letter knowledge and word reading ability within the artificial script were assessed. An additional goal was to investigate the influence of instructional approach on the initial learning of letter-speech sound correspondences. We assigned children from both groups to one of three different training conditions: (a) explicit instruction, (b) implicit associative learning within a computer game environment, or (c) a combination of (a) and (b) in which explicit instruction is followed by implicit learning. Our results indicated that dyslexics were outperformed by the controls on a time-pressured binding task and a word reading task within the artificial orthography, providing empirical support for the view that a letter-speech sound binding deficit is a key factor in dyslexia. A combination of explicit instruction and implicit techniques proved to be a more powerful tool in the initial teaching of letter-sound correspondences than implicit training alone. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond order potential, allows reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its one-order-smaller time-step than classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times over van Duin et al.'s FORTRAN codes in LAMMPS on 8 CPU cores and 6 times over the LAMMPS C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Relativistic Positioning Systems and Gravitational Perturbations

    NASA Astrophysics Data System (ADS)

    Gomboc, Andreja; Kostić, Uroš; Horvat, Martin; Carloni, Sante; Delva, Pacôme

    2013-11-01

    In order to deliver a high accuracy relativistic positioning system, several gravitational perturbations need to be taken into account. We therefore consider a system of satellites, such as the Galileo system, in a space-time described by a background Schwarzschild metric and small gravitational perturbations due to the Earth’s rotation, multipoles and tides, and the gravity of the Moon, the Sun, and planets. We present the status of this work currently carried out in the ESA Slovenian PECS project Relativistic Global Navigation System, give the explicit expressions for the perturbed metric, and briefly outline further steps.

  18. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parametric family of four-level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy by this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in the space and time steps in the discrete maximal norm.

  19. Simulating Free Surface Flows with SPH

    NASA Astrophysics Data System (ADS)

    Monaghan, J. J.

    1994-02-01

    The SPH (smoothed particle hydrodynamics) method is extended to deal with free surface incompressible flows. The method is easy to use, and examples will be given of its application to a breaking dam, a bore, the simulation of a wave maker, and the propagation of waves towards a beach. Arbitrary moving boundaries can be included by modelling the boundaries by particles which repel the fluid particles. The method is explicit, and the time steps are therefore much shorter than required by other less flexible methods, but it is robust and easy to program.

  20. Detailed analysis of grid-based molecular docking: A case study of CDOCKER-A CHARMm-based MD docking algorithm.

    PubMed

    Wu, Guosheng; Robertson, Daniel H; Brooks, Charles L; Vieth, Michal

    2003-10-01

    The influence of various factors on the accuracy of protein-ligand docking is examined. The factors investigated include the role of a grid representation of protein-ligand interactions, the initial ligand conformation and orientation, the sampling rate of the energy hyper-surface, and the final minimization. A representative docking method is used to study these factors, namely, CDOCKER, a molecular dynamics (MD) simulated-annealing-based algorithm. A major emphasis in these studies is to compare the relative performance and accuracy of various grid-based approximations to explicit all-atom force field calculations. In these docking studies, the protein is kept rigid while the ligands are treated as fully flexible and a final minimization step is used to refine the docked poses. A docking success rate of 74% is observed when an explicit all-atom representation of the protein (full force field) is used, while a lower accuracy of 66-76% is observed for grid-based methods. All docking experiments considered a 41-member protein-ligand validation set. A significant improvement in accuracy (76 vs. 66%) for the grid-based docking is achieved if the explicit all-atom force field is used in a final minimization step to refine the docking poses. Statistical analysis shows that even lower-accuracy grid-based energy representations can be effectively used when followed with full force field minimization. The results of these grid-based protocols are statistically indistinguishable from the detailed atomic dockings and provide up to a sixfold reduction in computation time. For the test case examined here, improving the docking accuracy did not necessarily enhance the ability to estimate binding affinities using the docked structures. Copyright 2003 Wiley Periodicals, Inc.

  1. The importance of explicitly mapping instructional analogies in science education

    NASA Astrophysics Data System (ADS)

    Asay, Loretta Johnson

    Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.

  2. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation for coupling the solid Earth to an ice-sheet compartment in an Earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit.: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x

  3. Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics

    NASA Astrophysics Data System (ADS)

    d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.

    2018-05-01

    Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
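    The property being targeted, invariant errors that decay one order faster than the scheme's own accuracy, can be checked numerically. The sketch below integrates undamped precession dm/dt = -m x h with the explicit Heun (RK2) method and measures the drift of |m|; Heun is our illustration, not one of the pseudo-symplectic schemes constructed in the paper.

```python
import numpy as np

def heun_precession(m0, h, dt, n):
    """Explicit Heun (RK2) integration of undamped precession dm/dt = -m x h.
    Generic Runge-Kutta steps do not preserve the magnetization amplitude
    exactly; pseudo-symplectic schemes are built so the amplitude and energy
    errors decay at order q > p. Illustrative sketch, not the paper's method."""
    m, hv = np.array(m0, float), np.array(h, float)
    f = lambda m: -np.cross(m, hv)
    for _ in range(n):
        k1 = f(m)
        k2 = f(m + dt * k1)
        m = m + 0.5 * dt * (k1 + k2)
    return m

# amplitude drift abs(|m| - 1) after t = 1; for this linear precession test
# Heun's drift shrinks roughly 8x when dt is halved (third order), one order
# better than its second-order trajectory accuracy
drift = lambda dt: abs(np.linalg.norm(
    heun_precession([1, 0, 0], [0, 0, 1], dt, round(1 / dt))) - 1.0)
```

    Measuring such drift ratios under step halving is the standard way to verify the order q of invariant preservation claimed for a pseudo-symplectic scheme.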

  4. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needed to be determined by using a combined Fourier analysis and gradient-based search algorithm.
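    The relaxation at the heart of LU-SGS-type solvers is a symmetric Gauss-Seidel sweep: a forward pass followed by a backward pass through the unknowns. The dense sketch below shows the basic building block on a small linear system; it is a generic illustration, while ST-LU-SGS applies such sweeps to the space-time coupled system with the temporal coupling treated as an extra sweep direction.

```python
import numpy as np

def sgs_sweep(A, b, x):
    """One symmetric Gauss-Seidel sweep (forward then backward) for A x = b,
    the relaxation underlying LU-SGS-type implicit solvers. Dense,
    illustrative version; production solvers sweep matrix-free over the
    lower/upper parts of the discrete operator."""
    n = len(b)
    for i in range(n):                      # forward sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    for i in range(n - 1, -1, -1):          # backward sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# diagonally dominant test system: each sweep contracts the error
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(20):
    x = sgs_sweep(A, b, x)
```

    Strong diagonal dominance of the implicit operator matrix, noted above as a requirement, is precisely what makes each sweep contractive.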

  5. Sensitivity of peptide conformational dynamics on clustering of a classical molecular dynamics trajectory

    NASA Astrophysics Data System (ADS)

    Jensen, Christian H.; Nerukh, Dmitry; Glen, Robert C.

    2008-03-01

    We investigate the sensitivity of a Markov model with states and transition probabilities obtained from clustering a molecular dynamics trajectory. We have examined a 500 ns molecular dynamics trajectory of the peptide valine-proline-alanine-leucine in explicit water. The sensitivity is quantified by varying the boundaries of the clusters and investigating the resulting variation in transition probabilities and the average transition time between states. In this way, we represent the effect of clustering using different clustering algorithms. It is found that, in terms of the investigated quantities, the peptide dynamics described by the Markov model is sensitive to the clustering; in particular, the average transition times are found to vary by up to 46%. Moreover, inclusion of nonphysical sparsely populated clusters can lead to serious errors of up to 814%. In the investigation, the time step used in the transition matrix is determined by the minimum time scale on which the system behaves approximately Markovian. This time step is found to be about 100 ps. It is concluded that the description of peptide dynamics with transition matrices should be performed with care, and that using standard clustering algorithms to obtain states and transition probabilities may not always produce reliable results.
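    The quantity being perturbed in such studies is the transition matrix estimated from the discretized trajectory at a chosen lag time. A minimal sketch of the standard count estimator (our own illustration; in practice the state labels come from clustering the MD trajectory and the lag is the ~100 ps Markovian time step discussed above):

```python
import numpy as np

def transition_matrix(labels, n_states, lag):
    """Row-stochastic transition matrix estimated from a discretized
    trajectory at a given lag (in frames): count transitions i -> j
    separated by `lag` frames, then normalize each row."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # leave unvisited states as zero rows
    return C / rows

labels = [0, 0, 1, 1, 0, 1, 0, 0]   # toy cluster assignments per frame
T = transition_matrix(labels, 2, 1)
```

    Shifting a cluster boundary relabels frames near the boundary, which changes the counts and hence the estimated transition probabilities; that is the sensitivity the paper quantifies.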

  6. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.
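    A standard way to raise a symmetric second-order splitting step to higher order is composition; the paper's contribution is to do this with processing, which is cheaper than the plain composition sketched here. The block below shows the classic Yoshida "triple jump" (a generic illustration, not the paper's volume-preserving scheme) applied to the implicit midpoint rule for y' = y:

```python
import math

def triple_jump(step2, y, dt):
    """Fourth-order composition of a symmetric second-order one-step method
    step2(y, dt) via Yoshida's triple jump, with w1 = 1/(2 - 2**(1/3)) and
    w0 = 1 - 2*w1. Generic composition sketch; processed methods reach the
    same order with fewer effective stages."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    return step2(step2(step2(y, w1 * dt), w0 * dt), w1 * dt)

# symmetric second-order base method: implicit midpoint for y' = y
midpoint = lambda y, dt: y * (1.0 + 0.5 * dt) / (1.0 - 0.5 * dt)

def integrate(dt, n):
    y = 1.0
    for _ in range(n):
        y = triple_jump(midpoint, y, dt)
    return y

# halving dt should shrink the error by ~16x for a fourth-order method
err = lambda dt: abs(integrate(dt, round(1.0 / dt)) - math.e)
```

    The negative substep w0 < 0 is intrinsic to such compositions and is one reason processed methods, which avoid some of this overhead, can be more efficient at the same order.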

  7. Compressible, multiphase semi-implicit method with moment of fluid interface representation

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Arienti, Marco

    2014-09-16

    A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large “impedance mismatch.”

  8. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) indicate smaller variations in flow properties compared with steady-state solutions. This failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  9. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulations over long times are required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge–Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, and with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
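
    The operator-exponential idea can be illustrated with a minimal sketch (not the paper's rational-approximation algorithm): for a skew-symmetric discretization L, the matrix E = exp(τL) is orthogonal, so applying it repeatedly takes large time steps while conserving the solution norm exactly. The grid, operator, and step size below are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Periodic central differences for 1D advection give a skew-symmetric L,
# so E = exp(tau*L) is orthogonal and the solution norm is conserved.
n, dx, tau = 128, 1.0 / 128, 0.5
L = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
L[0, -1], L[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)   # periodic wrap-around
E = expm(tau * L)                                   # time-evolution operator, built once

u = np.exp(-100 * (np.linspace(0, 1, n, endpoint=False) - 0.5) ** 2)
norm0 = np.linalg.norm(u)
for _ in range(20):                                 # 20 large steps, one matvec each
    u = E @ u
assert abs(np.linalg.norm(u) - norm0) < 1e-8        # norm conserved across large steps
```

Once E is available, each (large) time step costs a single matrix-vector product, which is the source of the speed-ups the abstract describes.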

  10. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly describe algebraic polynomials, and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy, and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
Owing to our recent developments, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step (or the initial conditions) and an advective Lagrangian step in the current time step, following the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial or temporal approximation of conservative transport. This new Eulerian-Lagrangian-Collocation scheme also resolves all the aforementioned numerical problems, due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for a large number of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  11. Slip Continuity in Explicit Crystal Plasticity Simulations Using Nonlocal Continuum and Semi-discrete Approaches

    DTIC Science & Technology

    2013-01-01

    Based Micropolar Single Crystal Plasticity: Comparison of Multi- and Single-Criterion Theories. J. Mech. Phys. Solids 2011, 59, 398–422. ALE3D ...element boundaries in a multi-step constitutive evaluation (Becker, 2011). The results showed the desired effects of smoothing the deformation field... Implementation The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012). The crystal plasticity

  12. A new indicator framework for quantifying the intensity of the terrestrial water cycle

    NASA Astrophysics Data System (ADS)

    Huntington, Thomas G.; Weiskel, Peter K.; Wolock, David M.; McCabe, Gregory J.

    2018-04-01

    A quantitative framework for characterizing the intensity of the water cycle over land is presented, and illustrated using a spatially distributed water-balance model of the conterminous United States (CONUS). We approach water cycle intensity (WCI) from a landscape perspective; WCI is defined as the sum of precipitation (P) and actual evapotranspiration (AET) over a spatially explicit landscape unit of interest, averaged over a specified time period (step) of interest. The time step may be of any length for which data or simulation results are available (e.g., sub-daily to multi-decadal). We define the storage-adjusted runoff (Q′) as the sum of actual runoff (Q) and the rate of change in soil moisture storage (ΔS/Δt, positive or negative) during the time step of interest. The Q′ indicator is demonstrated to be mathematically complementary to WCI, in a manner that allows graphical interpretation of their relationship. For the purposes of this study, the indicators were demonstrated using long-term, spatially distributed model simulations with an annual time step. WCI was found to increase over most of the CONUS between the 1945 to 1974 and 1985 to 2014 periods, driven primarily by increases in P. In portions of the western and southeastern CONUS, Q′ decreased because of decreases in Q and soil moisture storage. Analysis of WCI and Q′ at temporal scales ranging from sub-daily to multi-decadal could improve understanding of the wide spectrum of hydrologic responses that have been attributed to water cycle intensification, as well as trends in those responses.
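
    The two indicators are simple sums over a water-balance time step, so their complementarity can be checked in a few lines. The numbers below are hypothetical annual values for one landscape unit, not results from the paper's CONUS simulations.

```python
# Hypothetical annual water-balance numbers (mm/yr) for one landscape unit,
# illustrating WCI = P + AET and the storage-adjusted runoff Q' = Q + dS/dt.
P, AET, Q = 900.0, 550.0, 320.0      # precipitation, evapotranspiration, runoff
dS_dt = P - AET - Q                   # storage change closes the water balance

WCI = P + AET                         # water cycle intensity
Q_prime = Q + dS_dt                   # storage-adjusted runoff
# By construction Q' = P - AET, the complement of WCI noted in the abstract:
assert Q_prime == P - AET
print(WCI, Q_prime)                   # 1450.0 350.0
```

Because Q′ reduces to P − AET whenever the balance closes, plotting WCI against Q′ gives the graphical interpretation the abstract mentions.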

  13. A high-order semi-explicit discontinuous Galerkin solver for 3D incompressible flow with application to DNS and LES of turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Krank, Benjamin; Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-11-01

    We present an efficient discontinuous Galerkin scheme for simulation of the incompressible Navier-Stokes equations, including laminar and turbulent flow. We consider a semi-explicit high-order velocity-correction method for time integration as well as nodal equal-order discretizations for velocity and pressure. The non-linear convective term is treated explicitly, while a linear system is solved for the pressure Poisson equation and the viscous term. The key feature of our solver is a consistent penalty term reducing the local divergence error, in order to overcome recently reported instabilities in spatially under-resolved high-Reynolds-number flows as well as at small time steps. This penalty method is similar to the grad-div stabilization widely used in continuous finite elements. We further review and compare our method to several other techniques recently proposed in the literature to stabilize the method for such flow configurations. The solver is specifically designed for large-scale computations through matrix-free linear solvers, including efficient preconditioning strategies and tensor-product elements, which have allowed us to scale this code up to 34.4 billion degrees of freedom and 147,456 CPU cores. We validate our code and demonstrate optimal convergence rates for laminar flows in a vortex problem and flow past a cylinder, and show applicability of our solver to direct numerical simulation as well as implicit large-eddy simulation of turbulent channel flow at Reτ = 180 and 590.

  14. A FORTRAN program for calculating nonlinear seismic ground response

    USGS Publications Warehouse

    Joyner, William B.

    1977-01-01

    The program described here was designed for calculating the nonlinear seismic response of a system of horizontal soil layers underlain by a semi-infinite elastic medium representing bedrock. Excitation is a vertically incident shear wave in the underlying medium. The nonlinear hysteretic behavior of the soil is represented by a model consisting of simple linear springs and Coulomb friction elements arranged as shown. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. A brief program description is provided here with instructions for preparing the input and a source listing. A more detailed discussion of the method is presented elsewhere as is the description of a different program employing implicit integration.
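
    A minimal linear sketch of such an explicit step-by-step finite-difference scheme follows; the program's nonlinear hysteretic soil model and elastic-substratum boundary condition are omitted, and the grid, wave speed, and boundary conditions are illustrative choices, not the program's.

```python
import numpy as np

# 1D shear-wave equation u_tt = c^2 u_xx advanced with the standard
# second-order leapfrog update, stepping explicitly in space and time.
n, c, dx = 200, 200.0, 1.0           # nodes, shear wave speed (m/s), spacing (m)
dt = 0.9 * dx / c                    # inside the explicit stability (CFL) limit
r2 = (c * dt / dx) ** 2

x = np.arange(n) * dx
u_prev = np.exp(-0.01 * (x - 100) ** 2)   # initial displacement pulse
u = u_prev.copy()                          # zero initial velocity
for _ in range(300):
    u_next = np.empty_like(u)
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0           # fixed ends (not the program's BCs)
    u_prev, u = u, u_next
assert np.all(np.isfinite(u)) and np.abs(u).max() < 2.0   # stable under CFL
```

Taking dt above dx/c instead would violate the explicit stability criterion and the solution would blow up, which is the step-size restriction explicit schemes like this one carry.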

  15. Subsystem real-time time dependent density functional theory.

    PubMed

    Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele

    2015-04-21

    We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows one to partition a larger Kohn-Sham system into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: explicit excitation energy transfer in real time between subsystems, demonstrated for the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.

  16. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching, laminar Navier-Stokes codes through the combination of local preconditioning and multi-stage time-marching optimization. Local preconditioning is a technique that modifies the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness of the system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time-marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
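
    The multi-stage time marching whose coefficients are being optimized has the generic form u ← u0 + αk Δt f(u), applied stage by stage. The sketch below uses the classical Jameson-style coefficients as placeholders, not the paper's viscous-optimized values, and tests the scheme on the scalar model problem du/dt = λu.

```python
# Generic m-stage explicit scheme: each stage restarts from u0 and applies
# its own coefficient. Coefficients here are the classical (1/4, 1/3, 1/2, 1)
# set, used as an illustrative placeholder.
def multistage_step(u0, f, dt, alphas=(0.25, 1/3, 0.5, 1.0)):
    u = u0
    for a in alphas:
        u = u0 + a * dt * f(u)
    return u

# Scalar model problem du/dt = lam*u with lam < 0: the solution must decay.
lam = -4.0
u, dt = 1.0, 0.2
for _ in range(50):
    u = multistage_step(u, lambda v: lam * v, dt)
assert 0.0 < u < 1.0    # multistage update stays stable and decays
```

Optimizing the αk for viscous (diffusive) eigenvalues, as the paper does, enlarges the stability region along the negative real axis and hence the usable Δt.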

  17. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same optimal weight values for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigridding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  19. An unconditionally stable Runge-Kutta method for unsteady flows

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Chima, Rodrick V.

    1988-01-01

    A quasi-three dimensional analysis was developed for unsteady rotor-stator interaction in turbomachinery. The analysis solves the unsteady Euler or thin-layer Navier-Stokes equations in a body fitted coordinate system. It accounts for the effects of rotation, radius change, and stream surface thickness. The Baldwin-Lomax eddy viscosity model is used for turbulent flows. The equations are integrated in time using a four stage Runge-Kutta scheme with a constant time step. Implicit residual smoothing was employed to accelerate the solution of the time accurate computations. The scheme is described and accuracy analyses are given. Results are shown for a supersonic through-flow fan designed for NASA Lewis. The rotor:stator blade ratio was taken as 1:1. Results are also shown for the first stage of the Space Shuttle Main Engine high pressure fuel turbopump. Here the blade ratio is 2:3. Implicit residual smoothing was used to increase the time step limit of the unsmoothed scheme by a factor of six with negligible differences in the unsteady results. It is felt that the implicitly smoothed Runge-Kutta scheme is easily competitive with implicit schemes for unsteady flows while retaining the simplicity of an explicit scheme.
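
    Implicit residual smoothing of the kind used to enlarge the Runge-Kutta time-step limit replaces each stage residual R by a smoothed R′ satisfying (1 − ε δ²)R′ = R, a tridiagonal solve per grid line. The 1D sketch below uses an illustrative ε and boundary treatment, not the paper's.

```python
import numpy as np

# Smooth a stage residual R by solving (1 - eps*delta^2) R' = R, where
# delta^2 is the second-difference operator. The resulting tridiagonal
# system is solved densely here for clarity.
def smooth_residual(R, eps=0.6):
    n = len(R)
    A = np.diag((1 + 2 * eps) * np.ones(n))
    A += np.diag(-eps * np.ones(n - 1), 1) + np.diag(-eps * np.ones(n - 1), -1)
    return np.linalg.solve(A, R)

R = np.zeros(21)
R[10] = 1.0                              # spiky stage residual
Rs = smooth_residual(R)
assert Rs.max() < R.max() and Rs[9] > 0  # peak reduced, spread to neighbors
```

Spreading each residual over neighboring points damps the high-frequency content that limits the explicit stability bound, which is how the smoothed scheme tolerated a six-times-larger time step.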

  20. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1991-01-01

    Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements, developed recently, concern the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply, with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess their capability.

  1. Permeability and kinetic coefficients for mesoscale BCF surface step dynamics: Discrete two-dimensional deposition-diffusion equation analysis

    DOE PAGES

    Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.

    2016-04-08

    Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.

  3. First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet

    NASA Astrophysics Data System (ADS)

    Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan

    2017-04-01

    The paper offers a fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to oblateness and atmospheric drag. The solution is presented in time-explicit form and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and a Taylor series approximation is then proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation reduces the error in the position vector by a factor of 7. The proposed method aims at improving the accuracy of analytic orbit propagation and making it a viable alternative to computationally intensive numerical methods.

  4. Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia

    PubMed Central

    Paulmann, Silke; Ott, Derek V. M.; Kotz, Sonja A.

    2011-01-01

    The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider that (i) emotional speech processing entails multiple processing steps, and (ii) the BG may engage in one rather than another of these processing steps. In the present study we investigate three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in qualitatively different ways. Specifically, we explore early implicit emotional speech processing (probe verification) in an ERP experiment, followed by an explicit behavioral emotion recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence, patients and healthy controls show differentiation between emotional and neutral sentences in the P200 component (emotional salience detection) and in a following negative-going brain wave (meaning-related processing). However, behavioral recognition (the identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit stages of emotional speech processing rather than early ones. PMID:21437277

  5. A numerical investigation of viscous, incompressible flow past an axisymmetric body with and without spin

    NASA Astrophysics Data System (ADS)

    Weber, K. F.

    1985-12-01

    This study deals with a preliminary investigation of the effects of spin on the axisymmetric flow past a body of revolution. The study has its genesis in the larger problem of Magnus forces on spinning bodies at angle of attack. However, the fundamental behavior that arises when a spinning body is placed in a uniform stream is still not well understood; therefore, the problem of axisymmetric flow with spin was undertaken. The body consists of a 3-caliber cant-ogive blunted by a spherical nosecap, a 2-caliber cylindrical section, and a 1-caliber boattail. Numerical solutions of the compressible laminar Navier-Stokes equations are obtained using a modified version of the implicit-explicit method developed by MacCormack in 1981. The benchmark problem is the nonspinning body in uniform flow at a Reynolds number of 1.14. The results show that the modified method performs well and allows time steps that are an order of magnitude greater than those permitted by explicit stability criteria.

  6. A multiblock multigrid three-dimensional Euler equation solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Elmiligui, Alaa; Melson, N. Duane; Vonlavante, E.

    1990-01-01

    Current aerodynamic designs are often geometrically quite complex. Flexible computational tools are needed for the analysis of a wide range of configurations with both internal and external flows. In the past, geometrically dissimilar configurations required different analysis codes, each with a different grid topology. This duplication of codes can be avoided with a general multiblock formulation which can handle any grid topology. Rather than hard-wiring the grid topology into the program, it is instead dictated by input to the program. In this work, the compressible Euler equations, written in a body-fitted finite-volume formulation, are solved using a pseudo-time-marching approach. Two upwind methods (van Leer's flux-vector-splitting and Roe's flux-differencing) were investigated. Two types of explicit solvers (a two-step predictor-corrector and a modified multistage Runge-Kutta) were used with multigrid acceleration to enhance convergence. A multiblock strategy is used to allow greater geometric flexibility. A report on simple explicit upwind schemes for solving compressible flows is included.

  7. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  8. Scattering matrices of Lamb waves at irregular surface and void defects.

    PubMed

    Feng, Feilong; Shen, Jianzhong; Lin, Shuyu

    2012-08-01

    Time-harmonic solution of Lamb wave scattering in a plane-strain waveguide with irregular thickness is investigated based on stair-step discretization and stepwise mode matching. The transfer relations of the transmission matrices and reflection matrices are derived in both directions of the waveguide. With these, an explicit expression of the scattering matrix is derived. When the scattering region of an inner irregular defect is geometrically divided into several parts composed of sub-waveguides with variable thicknesses and void regions with vertical free edges corresponding to the plate surfaces, the scattering matrix of the whole region could then be derived by modal matching along the artificial boundaries, as explicit functions of all the scattering matrices of the sub-waveguides and reflection matrices of the free edges. The effectiveness of the formulation is examined by numerical examples; the calculated scattering coefficients are in good accordance with those obtained from numerical simulation models. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bomble, Yannick J; St. John, Peter C; Crowley, Michael F

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  10. Time-Extended Policies in Multi-Agent Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2004-01-01

    Reinforcement learning methods perform well in many domains where a single agent needs to take a sequence of actions to perform a task. These methods use sequences of single-time-step rewards to create a policy that tries to maximize a time-extended utility, which is a (possibly discounted) sum of these rewards. In this paper we build on our previous work showing how these methods can be extended to a multi-agent environment where each agent creates its own policy that works towards maximizing a time-extended global utility over all agents' actions. We show improved methods for creating time-extended utilities for the agents that are both "aligned" with the global utility and "learnable." We then show how to create single-time-step rewards while avoiding the pitfall of having rewards aligned with the global reward leading to utilities not aligned with the global utility. Finally, we apply these reward functions to the multi-agent Gridworld problem. We explicitly quantify a utility's learnability and alignment, and show that reinforcement learning agents using the prescribed reward functions successfully trade off learnability and alignment. As a result they outperform both global (e.g., team games) and local (e.g., "perfectly learnable") reinforcement learning solutions by as much as an order of magnitude.

  11. An L-stable method for solving stiff hydrodynamics

    NASA Astrophysics Data System (ADS)

    Li, Shengtai

    2017-07-01

    We develop a new method for simulating the coupled dynamics of gas and multi-species dust grains. The dust grains are treated as pressure-less fluids and their coupling with gas is through stiff drag terms. If an explicit method is used, the numerical time step is subject to the stopping time of the dust particles, which can become extremely small for small grains. The previous semi-implicit method [1] uses the second-order trapezoidal rule (TR) on the stiff drag terms, and it works only for moderately small dust grain sizes because the TR method is only A-stable, not L-stable. In this work, we use the TR-BDF2 method [2] for the stiff terms in the coupled hydrodynamic equations. The L-stability of TR-BDF2 proves essential in treating a number of dust species. The combination of the TR-BDF2 method with the explicit discretization of the other hydro terms can solve a wide variety of stiff hydrodynamics equations accurately and efficiently. We have implemented our method in our LA-COMPASS (Los Alamos Computational Astrophysics Suite) package. We have applied the code to simulate dusty proto-planetary disks and obtained very good matches with astronomical observations.
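
    As a sketch of the composite TR-BDF2 step (gamma = 2 - sqrt(2)) for the kind of stiff drag term described above, assuming a simplified linear single-species model dv/dt = -(v - u)/ts with fixed gas velocity u and stopping time ts (the actual solver couples gas and multiple dust fluids):

```python
import math

GAMMA = 2.0 - math.sqrt(2.0)

def trbdf2_drag_step(v, h, u, ts):
    # One TR-BDF2 step for dv/dt = -(v - u)/ts.  Both implicit stages
    # reduce to algebraic solves because the drag term is linear in v.
    g = GAMMA
    # trapezoidal stage over [t, t + g*h]: v* = v + (g*h/2)*(f(v) + f(v*))
    a = g * h / 2.0
    rhs = v + a * (-(v - u) / ts)
    v_star = (rhs + a * u / ts) / (1.0 + a / ts)
    # BDF2 stage: v1 = [v* - (1-g)^2 v]/[g(2-g)] + h*(1-g)/(2-g) * f(v1)
    c = (1.0 - g) / (2.0 - g) * h
    rhs2 = (v_star - (1.0 - g) ** 2 * v) / (g * (2.0 - g))
    return (rhs2 + c * u / ts) / (1.0 + c / ts)
```

    The L-stable damping is visible when the step is taken far beyond the stopping time: the dust velocity lands on the gas velocity instead of oscillating, which the A-stable trapezoidal rule alone would not guarantee.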

  12. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

    This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution to the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification that agreed well with the laboratory and field measurements, as well as previous modelling results available in the literature.
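
    The advection-diffusion splitting described above can be condensed into a 1-D sketch, under assumptions made here for brevity: a periodic grid, first-order upwind advection standing in for the paper's FEM discretization, a three-stage SSP Runge-Kutta method in the role of the explicit third-order scheme, and Crank-Nicolson for diffusion:

```python
import numpy as np

def imex_step(u, dt, dx, c, nu):
    # One split step: explicit 3-stage Runge-Kutta for advection,
    # then implicit Crank-Nicolson for diffusion (periodic domain).
    n = u.size

    def adv(v):  # first-order upwind advection term -c dv/dx (c > 0)
        return -c * (v - np.roll(v, 1)) / dx

    # explicit third-order (SSP) Runge-Kutta stages for advection
    u1 = u + dt * adv(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * adv(u1))
    ua = u / 3.0 + (2.0 / 3.0) * (u2 + dt * adv(u2))

    # Crank-Nicolson diffusion: (I - dt/2 L) u_new = (I + dt/2 L) ua,
    # with L the periodic second-difference operator scaled by nu/dx^2
    L = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 0)) * nu / dx**2
    A = np.eye(n) - 0.5 * dt * L
    b = (np.eye(n) + 0.5 * dt * L) @ ua
    return np.linalg.solve(A, b)
```

    A dense solve is used only to keep the sketch short; a production code would use a tridiagonal (or FEM mass/stiffness) solver.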

  13. Numerical experiments with a symmetric high-resolution shock-capturing scheme

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1986-01-01

    Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method, which was found to be just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.

  14. Incorporating evolution of transcription factor binding sites into annotated alignments.

    PubMed

    Bais, Abha S; Grossmann, Stefen; Vingron, Martin

    2007-08-01

    Identifying transcription factor binding sites (TFBSs) is essential to elucidate putative regulatory mechanisms. A common strategy is to combine cross-species conservation with single-sequence TFBS annotation to yield "conserved TFBSs". Most current methods in this field adopt a multi-step approach that segregates the two aspects. Moreover, it is widely accepted that the evolutionary dynamics of binding sites differ from those of the surrounding sequence; hence, it is desirable to have an approach that explicitly takes this factor into account. Although a plethora of approaches have been proposed for the prediction of conserved TFBSs, very few explicitly model TFBS evolutionary properties, and those remain multi-step. Recently, we introduced a novel approach to simultaneously align and annotate conserved TFBSs in a pair of sequences. Building upon the standard Smith-Waterman algorithm for local alignments, SimAnn introduces additional states for profiles to output extended alignments, or annotated alignments. That is, alignments with parts annotated as gaplessly aligned TFBSs (pair-profile hits) are generated. Moreover, the pair-profile-related parameters are derived in a sound statistical framework. In this article, we extend this approach to explicitly incorporate evolution of binding sites in the SimAnn framework. We demonstrate the extension in the theoretical derivations through two position-specific evolutionary models previously used for modelling TFBS evolution. In a simulated setting, we provide a proof of concept that the approach works given the underlying assumptions, as compared to the original work. Finally, using a real dataset of experimentally verified binding sites in human-mouse sequence pairs, we compare the new approach (eSimAnn) to an existing multi-step tool that also considers TFBS evolution.
Although it is widely accepted that binding sites evolve differently from the surrounding sequences, most comparative TFBS identification methods do not explicitly consider this. Additionally, prediction of conserved binding sites is carried out in a multi-step approach that segregates alignment from TFBS annotation. In this paper, we demonstrate how the simultaneous alignment and annotation approach of SimAnn can be further extended to incorporate TFBS evolutionary relationships. We study how alignments and binding site predictions interplay at varying evolutionary distances and for various profile qualities.

  15. Ageostrophic winds in the severe storm environment

    NASA Technical Reports Server (NTRS)

    Moore, J. T.

    1982-01-01

    The period from 1200 GMT 10 April to 0000 GMT 11 April 1979, during which several major tornadoes and severe thunderstorms (including the Wichita Falls tornado) occurred, was studied. A time-adjusted, isentropic data set was used to analyze key parameters. Fourth-order centered finite differences were used to compute the isallobaric, inertial advective, tendency, inertial advective geostrophic and ageostrophic winds. Explicit isentropic trajectories were computed through the isentropic, inviscid equations of motion using a 15 minute time step. Ageostrophic, geostrophic and total vertical motion fields were computed to judge the relative importance of ageostrophy in enhancing the vertical motion field. It is found that ageostrophy is symptomatic of those mass adjustments which take place during upper-level jet streak propagation and can, in a favorable environment, act to increase and release potential instability over meso-alpha time periods.

  16. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  17. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
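
    The element-blocking idea can be sketched in a vectorized setting, assuming a hypothetical model of 2-node spring/bar elements rather than the paper's one-point-quadrature shells: each block of elements is processed with whole-array operations, and scatter-adds assemble the internal force vector.

```python
import numpy as np

def internal_force(u, conn, k, block=1024):
    # Internal force f = K u assembled from 2-node spring elements,
    # processed block-by-block so each block is one set of vector ops.
    f = np.zeros_like(u)
    for s in range(0, len(conn), block):
        e = conn[s:s + block]                              # (nblk, 2) node pairs
        fe = k[s:s + block] * (u[e[:, 1]] - u[e[:, 0]])    # element forces
        np.add.at(f, e[:, 0], -fe)   # unbuffered scatter-add handles shared nodes
        np.add.at(f, e[:, 1], fe)
    return f
```

    In a concurrent implementation each block would become a task handed to an available processor, with blocks colored or synchronized so that shared-node updates do not conflict.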

  18. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. 
Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.

  19. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating mediums by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary method. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique caused the surface temperature to be bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
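
    The iterative boundary treatment amounts to a nonlinear solve of the surface energy balance at each time step. A minimal sketch with hypothetical parameter values, using Newton iteration on a single surface node balancing absorbed heating against re-radiation and conduction into the material:

```python
def surface_temperature(q_in, T1, k, dx, eps, sigma=5.670e-8, Ts=300.0, tol=1e-8):
    # Newton iteration on the radiation-conduction surface balance:
    #   r(Ts) = eps*sigma*Ts**4 + k*(Ts - T1)/dx - q_in = 0
    # where T1 is the first interior node temperature.
    for _ in range(100):
        r = eps * sigma * Ts**4 + k * (Ts - T1) / dx - q_in
        dr = 4.0 * eps * sigma * Ts**3 + k / dx   # dr/dTs
        delta = r / dr
        Ts -= delta
        if abs(delta) < tol:
            break
    return Ts
```

    A linearized boundary condition corresponds to stopping after one such iteration; the fully iterative variant keeps going until the surface temperature is bounded consistently with the interior solution at each step.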

  20. End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.

    2012-01-01

    To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.

  1. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

    NASA Astrophysics Data System (ADS)

    Harmon, Michael; Gamba, Irene M.; Ren, Kui

    2016-12-01

    This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.

  2. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange a finite-bit binary symbolic data sequence with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, which is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
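
    Stripped of the quantization, encoding, and event-triggering machinery, the underlying averaging iteration is x_{k+1} = x_k - eps*L*x_k. A minimal undirected sketch (the paper's directed, finite-bit protocol layers encoders and decoders on top of an iteration of this kind):

```python
import numpy as np

def consensus(x0, A, eps, steps):
    # Discrete-time averaging: x_{k+1} = x_k - eps * L x_k, with graph
    # Laplacian L = D - A.  On a connected undirected graph this converges
    # to the average of x0 for 0 < eps < 2 / lambda_max(L).
    L = np.diag(A.sum(axis=1)) - A
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * (L @ x)
    return x
```

    The geometric convergence factor is max over nonzero Laplacian eigenvalues of |1 - eps*lambda|, which is the kind of rate the paper characterizes in terms of network scale and structure.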

  3. Studying the Global Bifurcation Involving Wada Boundary Metamorphosis by a Method of Generalized Cell Mapping with Sampling-Adaptive Interpolation

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng

    In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of the computation of the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, and a sampling-adaptive control switches on integrations to maintain the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full to partial and partial to partial, as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.

  4. Numerical investigation of implementation of air-earth boundary by acoustic-elastic boundary approach

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2007-01-01

    The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismographs from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.

  5. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPU) is compared with the use of different meshes and different methods of distribution of input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  6. Four decades of implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B.

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  7. Early stages of the recovery stroke in myosin II studied by molecular dynamics simulations

    PubMed Central

    Baumketner, Andrij; Nesmelov, Yuri

    2011-01-01

    The recovery stroke is a key step in the functional cycle of the muscle motor protein myosin, during which the pre-recovery conformation of the protein is changed into the active post-recovery conformation, ready to exercise force. We study the microscopic details of this transition using molecular dynamics simulations of atomistic models in implicit and explicit solvent. In more than 2 μs of aggregate simulation time, we uncover evidence that the recovery stroke is a two-step process consisting of two stages separated by a time delay. In our simulations, we directly observe the first stage, at which the switch II loop closes in the presence of adenosine triphosphate at the nucleotide binding site. The resulting configuration of the nucleotide binding site is identical to that detected experimentally. The distribution of inter-residue distances measured in the force generating region of myosin is in good agreement with the experimental data. The second stage of the recovery stroke structural transition, rotation of the converter domain, was not observed in our simulations; apparently it occurs on a longer time scale. We suggest that the two parts of the recovery stroke need to be studied using separate computational models. PMID:21922589

  8. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  9. Well-balanced compressible cut-cell simulation of atmospheric flow.

    PubMed

    Klein, R; Bates, K R; Nikiforakis, N

    2009-11-28

    Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.

  10. Investigating the use of a rational Runge Kutta method for transport modelling

    NASA Astrophysics Data System (ADS)

    Dougherty, David E.

    An unconditionally stable explicit time integrator has recently been developed for parabolic systems of equations. This rational Runge Kutta (RRK) method, proposed by Wambecq [1] and Hairer [2], has been applied by Liu et al. [3] to linear heat conduction problems in a time-partitioned solution context. An important practical question is whether the method has application for the solution of (nearly) hyperbolic equations as well. In this paper the RRK method is applied to a nonlinear heat conduction problem, the advection-diffusion equation, and the hyperbolic Buckley-Leverett problem. The method is, indeed, found to be unconditionally stable for the linear heat conduction problem and performs satisfactorily for the nonlinear heat flow case. A heuristic limitation on the utility of RRK for the advection-diffusion equation arises in the Courant number; for the second-order accurate one-step two-stage RRK method, a limiting Courant number of 2 applies. First-order upwinding is not as effective when used with RRK as with Euler one-step methods. The method is found to perform poorly for the Buckley-Leverett problem.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components – a rapidly varying amplitude equation and a slowly varying shape equation – and each is solved separately on different time scales. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so the solution gives an accurate time-dependent solution without having to repeatedly perform the expensive transport calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry, and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).

  12. Toward transient finite element simulation of thermal deformation of machine tools in real-time

    NASA Astrophysics Data System (ADS)

    Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg

    2018-01-01

    Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this makes it possible to correct for displacements of the Tool Centre Point and enables high-precision manufacturing. However, the computational cost of FE models and the restriction to generic algorithms in commercial tools like ANSYS prevent their operational use, since simulations have to run faster than real-time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.
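
    The implicit-explicit multi-rate idea can be sketched in scalar form (a first-order illustration only; the paper uses a higher-order spectral-deferred-correction variant): the stiff term is absorbed implicitly with a large step, while a fast forcing is subcycled explicitly inside it.

```python
# Minimal multi-rate IMEX sketch (illustrative only; the paper uses a
# higher-order spectral-deferred-correction variant). A stiff linear
# decay term -lam*u stands in for slow heat diffusion and is treated
# implicitly with a large step H, while a fast explicit forcing f(t)
# (the machine-movement analogue) is subcycled with m small steps.

import math

def imex_multirate_step(u, t, H, m, lam, forcing):
    h = H / m
    for j in range(m):               # explicit subcycling of the fast term
        u += h * forcing(t + j * h)
    return u / (1.0 + H * lam)       # one backward-Euler solve for -lam*u

lam = 50.0
forcing = lambda t: math.cos(10.0 * t)
u, t, H, m = 1.0, 0.0, 0.1, 10
for _ in range(20):
    u = imex_multirate_step(u, t, H, m, lam, forcing)
    t += H
print(abs(u) < 0.05)  # bounded although H*lam = 5 exceeds the explicit limit
```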

  13. Fourier-Legendre spectral methods for incompressible channel flow

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Hussaini, M. Y.

    1984-01-01

    An iterative collocation technique is described for modeling implicit viscosity in three-dimensional incompressible wall-bounded shear flow. The viscosity can vary temporally and in the vertical direction. Channel flow is modeled with a Fourier-Legendre approximation, and the mean streamwise advection is treated implicitly. Explicit terms are handled with an Adams-Bashforth method to increase the allowable time step for calculation of the implicit terms. The algorithm is applied to low-amplitude unstable waves in a plane Poiseuille flow at a Reynolds number of 7500. Comparisons are made between results using the Legendre method and those using Chebyshev polynomials; comparable accuracy is obtained for the perturbation kinetic energy predicted using both discretizations.
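
    The explicit Adams-Bashforth treatment can be sketched as follows (a generic second-order AB2 step on a scalar test problem, not the channel-flow code itself):

```python
# Generic second-order Adams-Bashforth (AB2) stepping, as used for the
# explicitly treated terms (illustrative sketch; the paper couples this
# with an implicit treatment of mean streamwise advection).

import math

def ab2(f, y0, t0, dt, nsteps):
    """AB2: y_{n+1} = y_n + dt*(3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with one forward-Euler step."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + dt * f_prev          # Euler startup step
    t += dt
    for _ in range(nsteps - 1):
        f_curr = f(t, y)
        y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += dt
    return y

# Test problem y' = -y; exact solution at t = 1 is e^{-1}.
y_num = ab2(lambda t, y: -y, 1.0, 0.0, 1.0 / 200, 200)
err = abs(y_num - math.exp(-1.0))
print(err < 1e-4)
```

    Because AB2 reuses the previous right-hand-side evaluation, it costs one function evaluation per step yet delivers second-order accuracy.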

  14. Finite-element approach to Brownian dynamics of polymers.

    PubMed

    Cyron, Christian J; Wall, Wolfgang A

    2009-12-01

    In recent decades, simulation tools for Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints, and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost on the basis of a well-developed mathematical background.

  15. Analysis of Preconditioning and Relaxation Operators for the Discontinuous Galerkin Method Applied to Diffusion

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Shu, Chi-Wang

    2001-01-01

    The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with the order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time steps.

  16. A novel robot for imposing perturbations during overground walking: mechanism, control and normative stepping responses.

    PubMed

    Olenšek, Andrej; Zadravec, Matjaž; Matjačić, Zlatko

    2016-06-11

    The most common approach to studying dynamic balance during walking is by applying perturbations. Previous studies that investigated dynamic balance responses predominantly focused on applying perturbations in the frontal plane while walking on a treadmill. The goal of our work was to develop a balance assessment robot (BAR) that can be used during overground walking and to assess normative balance responses to perturbations in the transversal plane in a group of neurologically healthy individuals. BAR provides three passive degrees of freedom (DoF) and three actuated DoF at the pelvis that are admittance-controlled in such a way that the natural movement of the pelvis is not significantly affected. In this study BAR was used to assess normative balance responses in neurologically healthy individuals by applying linear perturbations in the frontal and sagittal planes and angular perturbations in the transversal plane of the pelvis. One-way repeated-measures ANOVA was used to statistically evaluate the effect of selected perturbations on stepping responses. Standard deviations of assessed responses were similar in unperturbed and perturbed walking. Perturbations in the frontal direction evoked substantial pelvis displacement and had a statistically significant effect on step length, step width and step time. Likewise, perturbations in the sagittal plane also had a statistically significant effect on step length, step width and step time, but with less pronounced impact on pelvis movement in the frontal plane. On the other hand, apart from substantial pelvis rotation, angular perturbations did not have a substantial effect on pelvis movement in the frontal and sagittal planes, and a statistically significant effect was noted only in step length and step width after perturbation in the clockwise direction. Results indicate that the proposed device can repeatedly reproduce similar experimental conditions. Results also suggest that a "stepping strategy" is the dominant strategy for coping with perturbations in the frontal plane, that perturbations in the sagittal plane are to a greater extent handled by an "ankle strategy", and that angular perturbations in the transversal plane do not pose a substantial challenge for balance. Results also show that a specific perturbation in general elicits responses that extend to other planes of movement not directly associated with the plane of perturbation, as well as to spatiotemporal parameters of gait.

  17. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses explicit time stepping by finite differences that are 4th order in space and 2nd order in time, a 3D version of the scheme developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first makes better use of resources for small models of dimensions up to 300x300x300 nodes: it under-samples the wavefield, reducing the number of stored time steps by an order of magnitude. For bigger models the wavefield is stored only at the boundaries of the model and then re-injected while the residuals are backpropagated, allowing the correlation to be computed 'on the fly'. In terms of computational resources, the elastic code is an order of magnitude more demanding than the equivalent acoustic code. We have combined shared-memory with distributed-memory parallelisation using OpenMP and MPI respectively. Thus, we take advantage of the increasingly common multi-core processor architectures. We have successfully applied our inversion algorithm to different realistic complex 3D models. The models had non-linear relations between pressure and shear wave velocities. The shorter wavelengths of the shear waves improve the resolution of the images obtained with respect to a purely acoustic approach.
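
    The time-stepping kernel can be illustrated in 1D (a scalar analogue only; the actual code is a 3D elastic staggered-grid scheme after Virieux): leapfrog in time (2nd order) combined with a 4th-order central stencil in space.

```python
# 1D scalar illustration of explicit time stepping that is 4th-order in
# space and 2nd-order in time (the paper's code is the 3D elastic
# staggered-grid analogue after Virieux, 1986).

import math

def wave_step(u_prev, u_curr, c, dx, dt):
    """Leapfrog in time; 4th-order central stencil for u_xx."""
    r2 = (c * dt / dx) ** 2
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(2, n - 2):
        uxx = (-u_curr[i-2] + 16*u_curr[i-1] - 30*u_curr[i]
               + 16*u_curr[i+1] - u_curr[i+2]) / 12.0
        u_next[i] = 2*u_curr[i] - u_prev[i] + r2 * uxx
    return u_next

n, dx, c = 101, 1.0, 1.0
dt = 0.5 * dx / c  # comfortably inside the CFL limit
u0 = [math.exp(-0.1 * (i - 50) ** 2) for i in range(n)]
u_prev, u_curr = u0[:], u0[:]  # start at rest: pulse splits in two
for _ in range(40):
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, c, dx, dt)
peak = max(abs(x) for x in u_curr)
print(0.05 < peak <= 1.01)  # stable propagation, no blow-up
```

    Only two time levels need to be kept in memory, which is the reason the abstract cites for preferring the time domain over a frequency-domain formulation.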

  18. Expected values for pedometer-determined physical activity in older populations

    PubMed Central

    2009-01-01

    The purpose of this review is to update expected values for pedometer-determined physical activity in free-living healthy older populations. A search of the literature published since 2001 began with a keyword (pedometer, "step counter," "step activity monitor" or "accelerometer AND steps/day") search of PubMed, Cumulative Index to Nursing & Allied Health Literature (CINAHL), SportDiscus, and PsycINFO. An iterative process was then undertaken to abstract and verify studies of pedometer-determined physical activity (captured in terms of steps taken; distance only was not accepted) in free-living adult populations described as ≥ 50 years of age (studies that included samples which spanned this threshold were not included unless they provided at least some appropriately age-stratified data) and not specifically recruited based on any chronic disease or disability. We identified 28 studies representing at least 1,343 males and 3,098 females ranging in age from 50–94 years. Eighteen (or 64%) of the studies clearly identified using a Yamax pedometer model. Monitoring frames ranged from 3 days to 1 year; the modal length of time was 7 days (17 studies, or 61%). Mean pedometer-determined physical activity ranged from 2,015 steps/day to 8,938 steps/day. In those studies reporting such data, consistent patterns emerged: males generally took more steps/day than similarly aged females, steps/day decreased across study-specific age groupings, and BMI-defined normal weight individuals took more steps/day than overweight/obese older adults. The range of 2,000–9,000 steps/day likely reflects the true variability of physical activity behaviors in older populations. More explicit patterns, for example sex- and age-specific relationships, remain to be informed by future research endeavors. PMID:19706192

  19. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, matching its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  20. On the continuum limit for a semidiscrete Hirota equation

    PubMed Central

    Pickering, Andrew; Zhao, Hai-qiong

    2016-01-01

    In this paper, we propose a new semidiscrete Hirota equation which yields the Hirota equation in the continuum limit. We focus on the topic of how the discrete space step δ affects the simulation for the soliton solution to the Hirota equation. The Darboux transformation and explicit solution for the semidiscrete Hirota equation are constructed. We show that the continuum limit for the semidiscrete Hirota equation, including the Lax pair, the Darboux transformation and the explicit solution, yields the corresponding results for the Hirota equation as δ→0. PMID:27956884

  1. A method for modeling oxygen diffusion in an agent-based model with application to host-pathogen infection

    DOE PAGES

    Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.

    2015-01-01

    This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection-mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during the chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in the explicit version of the finite-difference method but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady-state approximate solution to the diffusion equation. Figure 1 presents the evolution of the diffusion profiles of a containment granuloma over time.
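
    The idea of replacing CFL-limited explicit stepping with a direct steady-state solve can be sketched in 1D (an illustrative Dirichlet problem with a tridiagonal solve, not the paper's granuloma model):

```python
# Sketch of bypassing the CFL-limited explicit update by solving the
# steady-state diffusion equation directly as a linear system (1D
# Dirichlet analogue; the paper applies the idea to an oxygen field in
# host tissue).

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system
    (a: sub-diagonal, b: diagonal, c: super-diagonal, d: rhs)."""
    n = len(d)
    bp, dp = b[:], d[:]
    for i in range(1, n):
        m = a[i] / bp[i-1]
        bp[i] -= m * c[i-1]
        dp[i] -= m * dp[i-1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - c[i] * x[i+1]) / bp[i]
    return x

# Steady state of u_xx = -s(x) on (0, 1) with u(0) = u(1) = 0,
# discretized on 9 interior nodes with a uniform source s = 1.
n, dx = 9, 0.1
a = [1.0] * n                       # sub-diagonal
b = [-2.0] * n                      # diagonal
c = [1.0] * n                       # super-diagonal
d = [-1.0 * dx * dx] * n            # rhs of the discrete u_xx = -s
u = solve_tridiagonal(a, b, c, d)
# Exact solution is u(x) = x(1-x)/2, so the midpoint value is 0.125.
print(abs(u[4] - 0.125) < 1e-9)
```

    One linear solve replaces an arbitrarily long sequence of CFL-limited explicit steps, which is exactly the trade the abstract describes.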

  2. Adaptive form-finding method for form-fixed spatial network structures

    NASA Astrophysics Data System (ADS)

    Lan, Cheng; Tu, Xi; Xue, Junqing; Briseghella, Bruno; Zordan, Tobia

    2018-02-01

    An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced along with the example of designing an ellipsoidal network dome with bar-length variations kept as small as possible. A typical spherical geodesic network is selected as the initial state, with bar lengths falling into a limited number of groups. Next, this network is transformed into the desired ellipsoidal shape by applying compressions on the bars according to the bar-length variations caused by the transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions under the residual forces. During the form-finding process, the boundary condition constraining nodes to the ellipsoid surface is innovatively treated as reactions along the normal direction of the surface at the node positions, which balance the components of the nodal forces in the reverse direction induced by the compressions on the bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. The optimal solution is then found from the time history of states by properly choosing convergence criteria, and the presented form-finding procedure is shown to be applicable to form-fixed problems.
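
    Dynamic relaxation can be sketched with a single degree of freedom (a hypothetical spring pulling a node toward equilibrium, not the network-dome model): node motion under the residual force is integrated explicitly with damping until the residual vanishes.

```python
# Minimal dynamic-relaxation sketch (a hypothetical one-node spring
# problem, not the network-dome model): damped explicit integration of
# node motion under the residual force until equilibrium is reached.

def dynamic_relaxation(x0, force, mass=1.0, damping=0.3, dt=0.05,
                       tol=1e-8, max_steps=100000):
    x, v = x0, 0.0
    for step in range(max_steps):
        r = force(x)                       # residual force at the node
        if abs(r) < tol:
            return x, step                 # converged to equilibrium
        v = (1.0 - damping) * v + dt * r / mass
        x = x + dt * v                     # explicit position update
    return x, max_steps

# Linear spring of stiffness 4 pulling the node toward x = 2.
x_eq, steps = dynamic_relaxation(0.0, lambda x: 4.0 * (2.0 - x))
print(abs(x_eq - 2.0) < 1e-6, steps < 100000)
```

    In the paper the same loop runs over all network nodes, with the surface constraint enforced as an extra correction after each explicit step.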

  3. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix (the ``mass'' matrix) using successive sparsity-pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a second-order vector wave (curl-curl) equation but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.

  4. When less is more - Implicit preference for incomplete bodies in xenomelia.

    PubMed

    Macauda, Gianluca; Bekrater-Bodmann, Robin; Brugger, Peter; Lenggenhager, Bigna

    2017-01-01

    Individuals with xenomelia identify with an amputated rather than with their physically complete, healthy body. They often mimic amputees and show a strong admiration of and sexual attraction towards them. Here we investigated for the first time empirically whether such unusual preference for amputated bodies is present also on an implicit level. Using the well-validated Implicit Association Test we show that individuals with xenomelia manifested a stronger implicit and explicit preference for amputated bodies than a normally-limbed control group and a group of involuntary amputees did. Interestingly, the two latter groups did not differ in their implicit and explicit preference for complete versus amputated bodies. These findings are an important step in understanding how deeply rooted attitudes about a socially normative body appearance may be influenced by a developmentally disordered experience of one's own bodily self. We conclude that this is the first behavioral evidence demonstrating a conflict of self-identification on an implicit level and this enriches current understandings of xenomelia as a primarily neurological disorder.

  5. Ideas for Future GPS Timing Improvements

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now has higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in the process of carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper will present an operational perspective on several ideas for improving timing in GPS. These ideas include the desire for improving MCS - US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman Filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.

  6. A third-order implicit discontinuous Galerkin method based on a Hermite WENO reconstruction for time-accurate solution of the compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Liu, Xiaodong; Luo, Hong

    2015-06-01

    Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier–Stokes equations. At each time step, a lower-upper symmetric Gauss–Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge–Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than the standard third-order discontinuous Galerkin methods, and less computing time than the lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.

  7. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.

  8. HEATING 7.1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

    HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iteration with extrapolation, direct solution (for one- or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or the Levy explicit method (which in some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
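
    The stability contrast between the Classical Explicit Procedure and the Crank-Nicolson scheme can be seen from scalar amplification factors (an illustrative test-equation check, not HEATING itself):

```python
# Scalar stability check contrasting the Classical Explicit Procedure
# (forward Euler) with the Crank-Nicolson scheme on the test equation
# y' = -lam * y, lam > 0. The explicit amplification factor leaves
# [-1, 1] once lam * dt > 2; the Crank-Nicolson factor never does.

def amp_explicit(lam, dt):
    return 1.0 - lam * dt

def amp_crank_nicolson(lam, dt):
    return (1.0 - 0.5 * lam * dt) / (1.0 + 0.5 * lam * dt)

lam = 100.0
results = [(abs(amp_explicit(lam, dt)) <= 1.0,
            abs(amp_crank_nicolson(lam, dt)) <= 1.0)
           for dt in (0.001, 0.05, 1.0)]
print(results)
```

    This is the trade-off behind HEATING's menu of transient schemes: implicit methods buy unconditional stability at the price of a system solve per step.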

  9. Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution

    PubMed Central

    Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin

    2016-01-01

    The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be directly delivered to the advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114

  10. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT solutions for DDE problems agreed with the explicit analytic solutions, as well as with the MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data.
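
    The method of steps can be sketched for the classic DDE y'(t) = -y(t-1) (an illustrative pure-Python version, independent of S-ADAPT and LSODA): on each interval of length tau the delayed term is known data from the previous interval, so the DDE reduces to an ODE.

```python
# Sketch of the method of steps (illustrative, independent of S-ADAPT):
# on each interval [k*tau, (k+1)*tau] the DDE y'(t) = -y(t - tau)
# becomes an ODE, because the delayed term is known from the previous
# interval. Each interval's ODE is integrated with fine Euler substeps.

def method_of_steps(tau=1.0, history=1.0, intervals=2, substeps=1000):
    h = tau / substeps
    past = [history] * (substeps + 1)   # y on the previous interval
    y = history
    for _ in range(intervals):
        current = [y]
        for j in range(substeps):
            y += h * (-past[j])         # delayed term is known data
            current.append(y)
        past = current                  # this interval becomes history
    return y

# Constant history y(t) = 1 on [-1, 0]: the exact solution is
# y(t) = 1 - t on [0, 1] and y(t) = t^2/2 - 2t + 3/2 on [1, 2],
# so y(2) = -0.5.
y2 = method_of_steps()
print(abs(y2 - (-0.5)) < 1e-2)
```

    A production solver such as the one described here would hand each interval's ODE to LSODA instead of the crude Euler loop, but the interval-by-interval structure is the same.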

  11. On the numerical solution of the dynamically loaded hydrodynamic lubrication of the point contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.

    1990-01-01

    The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.

  12. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is an explicit iterative scheme based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on optimizing the convergence of the iterations to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.

  13. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
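
    A generic accept/reject step-size controller (the abstract does not detail the paper's exact strategy; this is a common step-doubling pattern on a scalar problem) might look like:

```python
# Generic step-doubling adaptive time-stepping sketch (the paper's
# exact strategy is not detailed in the abstract; this shows the common
# accept/reject pattern used to accelerate progress to equilibrium).

import math

def euler(f, y, t, dt):
    return y + dt * f(t, y)

def adaptive_integrate(f, y0, t0, t_end, dt0, tol=1e-6):
    t, y, dt = t0, y0, dt0
    accepted = rejected = 0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_big = euler(f, y, t, dt)                       # one big step
        y_half = euler(f, euler(f, y, t, dt / 2),
                       t + dt / 2, dt / 2)               # two half steps
        err = abs(y_half - y_big)                        # error estimate
        if err <= tol:
            t, y = t + dt, y_half
            accepted += 1
            dt *= 1.5                  # grow the step when error is small
        else:
            rejected += 1
            dt *= 0.5                  # retry with a smaller step
    return y, accepted, rejected

y, acc, rej = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 5.0, 0.5)
print(abs(y - math.exp(-5.0)) < 1e-2, acc > 0)
```

    As the solution approaches equilibrium the error estimate shrinks, so the controller keeps enlarging the step, which is the effect the paper exploits.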

  14. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a grant from the AFOSR.

  15. Three-dimensional control of crystal growth using magnetic fields

    NASA Astrophysics Data System (ADS)

    Dulikravich, George S.; Ahuja, Vineet; Lee, Seungsoo

    1993-07-01

    Two coupled systems of partial differential equations governing three-dimensional laminar viscous flow undergoing solidification or melting under the influence of arbitrarily oriented externally applied magnetic fields have been formulated. The model accounts for arbitrary temperature dependence of physical properties including latent heat release, effects of Joule heating, magnetic field forces, and mushy region existence. On the basis of this model a numerical algorithm has been developed and implemented using central differencing on a curvilinear boundary-conforming grid and Runge-Kutta explicit time-stepping. The numerical results clearly demonstrate possibilities for active and practically instantaneous control of melt/solid interface shape, the solidification/melting front propagation speed, and the amount and location of solid accrued.
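    Explicit multistage Runge-Kutta time stepping of the kind used in the algorithm above advances a semi-discrete system one step at a time. The classical four-stage scheme below is a generic sketch of the technique; the paper's solver uses its own problem-specific multistage coefficients, not necessarily classical RK4.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y),
    where y is a list of state values and f returns a list of derivatives."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# usage: integrate y' = -y from y(0) = 1 to t = 1 (exact answer: exp(-1))
y, t = [1.0], 0.0
for _ in range(10):
    y = rk4_step(lambda t, y: [-y[0]], t, y, 0.1)
    t += 0.1
```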

  16. Bell-Kochen-Specker theorem for any finite dimension?

    NASA Astrophysics Data System (ADS)

    Cabello, Adán; García-Alcaine, Guillermo

    1996-03-01

    The Bell-Kochen-Specker theorem against non-contextual hidden variables can be proved by constructing a finite set of `totally non-colourable' directions, as Kochen and Specker did in a Hilbert space of dimension n = 3. We generalize Kochen and Specker's set to Hilbert spaces of any finite dimension, in a three-step process that shows the relationship between different kinds of proofs (`continuum', `probabilistic', `state-specific' and `state-independent') of the Bell-Kochen-Specker theorem. At the same time, this construction of a totally non-colourable set of directions in any dimension explicitly solves the question raised by Zimba and Penrose about the existence of such a set for n = 5.

  17. Multigrid schemes for viscous hypersonic flows

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.

    1993-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving two different hypersonic flow problems. Some new multigrid schemes, based on semicoarsening strategies, are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 x 10(exp 6).

  18. Accurate solutions for transonic viscous flow over finite wings

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.

    1986-01-01

    An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through usage of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with the experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.

  19. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.

  20. Calculation of viscous effects on transonic flow for oscillating airfoils and comparisons with experiment

    NASA Technical Reports Server (NTRS)

    Howlett, James T.; Bland, Samuel R.

    1987-01-01

    A method is described for calculating unsteady transonic flow with viscous interaction by coupling a steady integral boundary-layer code with an unsteady, transonic, inviscid small-disturbance computer code in a quasi-steady fashion. Explicit coupling of the equations together with viscous-inviscid iterations at each time step yields converged solutions with computer times about double those required to obtain inviscid solutions. The accuracy and range of applicability of the method are investigated by applying it to four AGARD standard airfoils. The first-harmonic components of both the unsteady pressure distributions and the lift and moment coefficients have been calculated. Comparisons with inviscid calculations and experimental data are presented. The results demonstrate that accurate solutions for transonic flows with viscous effects can be obtained for flows involving moderate-strength shock waves.

  1. Efficient and accurate numerical schemes for a hydro-dynamically coupled phase field diblock copolymer model

    NASA Astrophysics Data System (ADS)

    Cheng, Qing; Yang, Xiaofeng; Shen, Jie

    2017-07-01

    In this paper, we consider numerical approximations of a hydro-dynamically coupled phase field diblock copolymer model, in which the free energy contains a kinetic potential, a gradient entropy, a Ginzburg-Landau double well potential, and a long range nonlocal type potential. We develop a set of second order time marching schemes for this system using the "Invariant Energy Quadratization" approach for the double well potential, the projection method for the Navier-Stokes equation, and a subtle implicit-explicit treatment for the stress and convective term. The resulting schemes are linear and lead to symmetric positive definite systems at each time step, thus they can be efficiently solved. We further prove that these schemes are unconditionally energy stable. Various numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.

  2. A vectorized code for calculating laminar and turbulent hypersonic flows about blunt axisymmetric bodies at zero and small angles of attack

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.

    1980-01-01

    A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.

  3. Implicit and explicit motor sequence learning in children born very preterm.

    PubMed

    Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steiner, K; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G

    2017-01-01

    Motor skills can be learned explicitly (dependent on working memory (WM)) or implicitly (relatively independent of WM). Children born very preterm (VPT) often have working memory deficits. Explicit learning may be compromised in these children. This study investigated implicit and explicit motor learning and the role of working memory in VPT children and controls. Three groups (6-9 years) participated: 20 VPT children with motor problems, 20 VPT children without motor problems, and 20 controls. A nine-button sequence was learned implicitly (pressing the lighted button as quickly as possible) and explicitly (discovering the sequence via trial-and-error). Children learned implicitly and explicitly, evidenced by decreased movement duration of the sequence over time. In the explicit condition, children also reduced the number of errors over time. Controls made more errors than VPT children without motor problems. Visual WM had positive effects on both explicit and implicit performance. VPT birth and low motor proficiency did not negatively affect implicit or explicit learning. Visual WM was positively related to both implicit and explicit performance, but did not influence learning curves. These findings question the theoretical difference between implicit and explicit learning and the proposed role of visual WM therein. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Alternating direction implicit methods for parabolic equations with a mixed derivative

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1980-01-01

    Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.
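    Approximate factorization makes each ADI sweep a set of one-dimensional implicit solves, i.e. tridiagonal linear systems along one grid direction at a time. The Thomas algorithm below is the standard way to solve such systems; it is a generic sketch of that building block, not the scheme constructed in the paper.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (lists of length n;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# usage: a 3x3 system whose exact solution is [1, 2, 3]
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0],
                 [4.0, 8.0, 8.0])
```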

  5. Alternating direction implicit methods for parabolic equations with a mixed derivative

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1979-01-01

    Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.

  6. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    PubMed

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
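    The idea of choosing the step size of explicit Euler to control error can be sketched with step doubling: compare one full step against two half steps and adapt the step from the discrepancy. The error estimate, update factor, and tolerances below are generic assumptions for illustration, not the minimum-total-error formula derived in the article.

```python
import math

def euler_adaptive(f, t0, y0, t_end, tol=1e-4, h0=0.1):
    """Explicit Euler for the scalar ODE y' = f(t, y) with simple
    step-doubling error control: a step is accepted only if the
    difference between one full step and two half steps is below tol."""
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = y + h * f(t, y)                       # one step of size h
        y_half = y + 0.5 * h * f(t, y)                 # two steps of size h/2
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_two - y_full)                      # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_two                        # accept the step
        # grow/shrink the step (rejected steps always shrink, since err > tol)
        h *= min(2.0, max(0.1, 0.9 * math.sqrt(tol / (err + 1e-15))))
    return y

# usage: y' = -y, y(0) = 1, integrated to t = 1 (exact answer: exp(-1))
y1 = euler_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```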

  7. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  8. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  9. A GPU-accelerated implicit meshless method for compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method can be further improved by ten to fifteen times on the GPU.
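    The rainbow coloring described above amounts to graph coloring: points of the same color share no neighbors, so they can be updated in parallel without data races. A greedy sketch of the idea (not the authors' implementation) is:

```python
def greedy_coloring(neighbors):
    """Assign each point the smallest color not used by any already-colored
    neighbor.  `neighbors` maps a point id to the ids of its neighbors
    (assumed symmetric).  Points sharing a color are mutually independent
    and may be processed color-by-color in parallel."""
    colors = {}
    for p in sorted(neighbors):                    # deterministic order
        used = {colors[q] for q in neighbors[p] if q in colors}
        c = 0
        while c in used:
            c += 1
        colors[p] = c
    return colors

# usage: a small point cloud where 0, 1, 2 are mutual neighbors
colors = greedy_coloring({0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]})
```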

  10. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generates a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances which enable them to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach, the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising biological simplifying conditions, identification of dynamical modules and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].

  11. Sampling the multiple folding mechanisms of Trp-cage in explicit solvent

    PubMed Central

    Juraszek, J.; Bolhuis, P. G.

    2006-01-01

    We investigate the kinetic pathways of folding and unfolding of the designed miniprotein Trp-cage in explicit solvent. Straightforward molecular dynamics and replica exchange methods both have severe convergence problems, whereas transition path sampling allows us to sample unbiased dynamical pathways between folded and unfolded states and leads to deeper understanding of the mechanisms of (un)folding. In contrast to previous predictions employing an implicit solvent, we find that Trp-cage folds primarily (80% of the paths) via a pathway forming the tertiary contacts and the salt bridge before helix formation. The remaining 20% of the paths occur in the opposite order, by first forming the helix. The transition states of the rate-limiting steps are solvated native-like structures. Water expulsion is found to be the last step upon folding for each route. Committor analysis suggests that the dynamics of the solvent is not part of the reaction coordinate. Nevertheless, during the transition, specific water molecules are strongly bound and can play a structural role in the folding. PMID:17035504

  12. Community detection using Kernel Spectral Clustering with memory

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Suykens, Johan A. K.

    2013-02-01

    This work is related to the problem of community detection in dynamic scenarios, which arises for instance in the segmentation of moving objects, clustering of telephone traffic data, time-series microarray data, etc. A desirable feature of a clustering model which has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend and at the same time smooth out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as a valid prior knowledge. The latter, in fact, allows the model to cluster the current data well and to be consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother over time the clustering results are. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, which is a state-of-the-art method, and we obtain comparable or better results.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  14. Numerical simulation of the kinetic effects in the solar wind

    NASA Astrophysics Data System (ADS)

    Sokolov, I.; Toth, G.; Gombosi, T. I.

    2017-12-01

    Global numerical simulations of the solar wind are usually based on the ideal or resistive magnetohydrodynamics (MHD) equations. Within the framework of MHD, the electric field is assumed to vanish in the co-moving frame of reference (ideal MHD) or to obey a simple and non-physical scalar Ohm's law (resistive MHD). Maxwellian distribution functions are assumed, though the electron and ion temperatures may differ. Non-dispersive MHD waves can be present in this numerical model. The averaged equations for MHD turbulence may be included, as well as the energy and momentum exchange between the turbulent and regular motion. With the use of an explicit numerical scheme, the time step is controlled by the MHD wave propagation time across the numerical cell (the CFL condition). A more refined approach includes the Hall effect via the generalized Ohm's law. The Lorentz force acting on the light electrons is assumed to vanish, which gives an expression for the local electric field in terms of the total electric current, the ion current, the electron pressure gradient, and the magnetic field. The waves (whistlers, ion-cyclotron waves, etc.) acquire dispersion, and the short-wavelength perturbations propagate with elevated speed, thus strengthening the CFL condition. If the grid size is sufficiently small to resolve the ion skin-depth scale, then the time step is much shorter than the ion gyration period. The next natural step is to use a hybrid code to resolve the ion kinetic effects. The hybrid numerical scheme employs the same generalized Ohm's law as Hall MHD and suffers from the same constraint on the time step while solving the evolution of the electromagnetic field. The important distinction, however, is that by solving the particle motion for ions we can achieve a more detailed description of the kinetic effects without a significant degradation in computational efficiency, because the time step is sufficient to resolve the particle gyration.
    We present the first numerical results from the coupled BATS-R-US+ALTOR code as applied to kinetic simulations of the solar wind.
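    The CFL constraint described above — the explicit time step must not exceed the wave transit time across a numerical cell — can be sketched as a tiny helper. The interface and the safety factor are illustrative assumptions; production codes take the minimum over all cells and directions.

```python
def cfl_time_step(dx, wave_speeds, cfl=0.5):
    """Largest stable explicit time step under the CFL condition:
    dt <= cfl * dx / max |wave speed|, where dx is the cell size and
    wave_speeds lists the signal speeds present in the cell."""
    c_max = max(abs(c) for c in wave_speeds)
    return cfl * dx / c_max

# usage: fastest wave moves at speed 4, cell size 0.1, safety factor 0.5
dt = cfl_time_step(0.1, [2.0, -4.0])
```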

  15. Global phenomena from local rules: Peer-to-peer networks and crystal steps

    NASA Astrophysics Data System (ADS)

    Finkbiner, Amy

    Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. 
    We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.

  16. Novel application of explicit dynamics occupancy models to ongoing aquatic invasions

    USGS Publications Warehouse

    Sepulveda, Adam J.

    2018-01-01

    Identification of suitable habitats, where invasive species can establish, is an important step towards controlling their spread. Accurate identification is difficult for new or slow invaders because unoccupied habitats may be suitable, given enough time for dispersal, while occupied habitats may prove to be unsuitable for establishment. To identify the suitable habitat of a recent invader, I used an explicit dynamics occupancy modelling framework to evaluate habitat covariates related to successful and failed establishments of American bullfrogs (Lithobates catesbeianus) within the Yellowstone River floodplain of Montana, USA from 2012-2016. During this five-year period, bullfrogs failed to establish at most sites they colonized. Bullfrog establishment was most likely to occur and least likely to fail at sites closest to human-modified ponds and lakes and those with emergent vegetation. These habitat covariates were generally associated with the presence of permanent water. Suitable habitat for bullfrog establishment is abundant in the Yellowstone River floodplain, though many sites with suitable habitat remain uncolonized. Thus, the maximum distribution of bullfrogs is much greater than their current distribution. Synthesis and applications: Focused control efforts on habitats with or proximate to permanent waters are most likely to reduce the potential for invasive bullfrog establishment and spread in the Yellowstone River. The novel application of explicit dynamics occupancy models is a useful and widely applicable tool for guiding management efforts towards those habitats where new or slow invaders are most likely to establish and persist.

  17. Asynchronously Coupled Models of Ice Loss from Airless Planetary Bodies

    NASA Astrophysics Data System (ADS)

    Schorghofer, N.

    2016-12-01

Ice is found near the surface of the dwarf planet Ceres, in some main belt asteroids, and perhaps in NEOs that will be explored or even mined in the future. The simple but important question of how fast ice is lost from airless bodies can present computational challenges. The thermal cycle on the surface repeats on much shorter time-scales than ice retreats; one process acts on the time-scale of hours, the other over billions of years. This multi-scale situation is addressed with asynchronous coupling, where models with different time steps are woven together. The sharp contrast at the retreating ice table is handled with explicit interface tracking. For Ceres, which is covered with a thermally insulating dust mantle, desiccation rates are orders of magnitude slower than had been calculated with simpler models. Further modeling challenges remain: the role of impact devolatilization and the time-scale for complete desiccation of an asteroid. I will also share my experience with code distribution using GitHub and Zenodo.
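The asynchronous-coupling pattern described in this abstract is easy to sketch: the fast process is resolved only to produce a time-averaged rate, which then drives the slow process with a far larger time step. A toy illustration with entirely made-up rates and constants, not the actual Ceres model:

```python
import math

# Toy sketch of asynchronous coupling: a fast diurnal thermal cycle is
# resolved occasionally, and its averaged effect drives the slow
# ice-table retreat with a much larger step. All numbers are invented.

def mean_loss_rate(depth):
    """Resolve one fast diurnal cycle; return its time-averaged rate."""
    steps = 240
    total = 0.0
    for i in range(steps):
        temp = 150.0 + 40.0 * math.sin(2 * math.pi * i / steps)  # K, toy forcing
        total += math.exp(-6000.0 / temp)     # Arrhenius-like sublimation rate
    return total / steps * math.exp(-depth)   # deeper ice is better insulated

depth = 0.0        # ice-table depth (arbitrary units)
dt_slow = 1.0e3    # slow step spans many thermal cycles
for _ in range(100):
    depth += dt_slow * mean_loss_rate(depth)  # explicit interface tracking

print(0.0 < depth < 1.0)  # slow, finite retreat
```

The point of the pattern is that the inner loop runs 240 fast steps only once per slow step, rather than once per hour of simulated time.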

  18. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    PubMed Central

    2010-01-01

    Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129

  19. Cellular Automata and the Humanities.

    ERIC Educational Resources Information Center

    Gallo, Ernest

    1994-01-01

    The use of cellular automata to analyze several pre-Socratic hypotheses about the evolution of the physical world is discussed. These hypotheses combine characteristics of both rigorous and metaphoric language. Since the computer demands explicit instructions for each step in the evolution of the automaton, such models can reveal conceptual…

  20. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

The methodology is presented for the development of full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved superior to the implicit controller in both performance and ease of design.

  1. Improving efficiency and safety in external beam radiation therapy treatment delivery using a Kaizen approach.

    PubMed

    Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis

Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists as well as peer review more explicit. The average duration of treatment slots decreased by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows.
Higher effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  2. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
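The core idea of deriving a predictor directly from input-output data can be shown in miniature. A hedged sketch (all numbers invented; this is not the paper's multirate formulation): a one-step ARX predictor y[k+1] ≈ a·y[k] + b·u[k] fitted by least squares to data from a hidden first-order system.

```python
# Fit a one-step-ahead predictor from input-output data alone,
# with no explicit model of the plant. Toy illustration only.

def fit_arx(u, y):
    """Solve [y[k], u[k]] @ [a, b] = y[k+1] in the least-squares sense
    via the 2x2 normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(len(y) - 1):
        s11 += y[k] * y[k]; s12 += y[k] * u[k]; s22 += u[k] * u[k]
        r1 += y[k] * y[k + 1]; r2 += u[k] * y[k + 1]
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# Generate data from a hidden system y[k+1] = 0.9 y[k] + 0.5 u[k].
u = [1.0 if k % 7 < 3 else -1.0 for k in range(50)]  # persistently exciting input
y = [0.0]
for k in range(49):
    y.append(0.9 * y[k] + 0.5 * u[k])

a, b = fit_arx(u, y)
print(round(a, 6), round(b, 6))  # recovers 0.9 and 0.5
```

Because the data are exactly generated by a model in the regressor class, least squares recovers the coefficients; with noisy data the fit would be approximate.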

  3. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting them into terms that can be solved analytically, a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing the model in this way, the operator splitting, which makes the structural analysis of the model more complex, can be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even though it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
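A Nash cascade advanced with an implicit scheme can be sketched compactly. The following toy (invented parameter values and names, not the GR4J implementation) chains three linear reservoirs, dS_i/dt = q_{i-1} − S_i/k, and steps them with backward Euler; because the system is lower triangular, solving the reservoirs in order gives the implicit solution without iteration.

```python
# Minimal sketch of a Nash cascade of linear reservoirs advanced with
# implicit (backward) Euler, as a stand-in for a unit hydrograph.

def nash_cascade_step(storages, inflow, k, dt):
    """One backward-Euler step through n linear reservoirs in series."""
    q_in = inflow
    new = []
    for s in storages:
        s_next = (s + dt * q_in) / (1.0 + dt / k)  # implicit, unconditionally stable
        q_in = s_next / k                          # outflow feeds the next reservoir
        new.append(s_next)
    return new, q_in                               # q_in is now the cascade outflow

states = [0.0, 0.0, 0.0]            # three empty reservoirs
hydrograph = []
for t in range(50):
    rain = 10.0 if t == 0 else 0.0  # a single impulse of rainfall
    states, q_out = nash_cascade_step(states, rain, k=3.0, dt=1.0)
    hydrograph.append(q_out)

peak = max(hydrograph)
print(hydrograph.index(peak) > 0)   # the impulse is delayed and smoothed
```

The discrete scheme also conserves mass: summing the outflow over time recovers the impulse volume minus whatever remains in storage.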

  4. An epidemic model for the interactions between thermal regime of rivers and transmission of Proliferative Kidney Disease in salmonid fish

    NASA Astrophysics Data System (ADS)

    Carraro, Luca; Bertuzzo, Enrico; Mari, Lorenzo; Gatto, Marino; Strepparava, Nicole; Hartikainen, Hanna; Rinaldo, Andrea

    2015-04-01

Proliferative kidney disease (PKD) affects salmonid populations in European and North American rivers. It is caused by the endoparasitic myxozoan Tetracapsuloides bryosalmonae, which exploits freshwater bryozoans (Fredericella sultana) and salmonids as primary and secondary hosts, respectively. Incidence and mortality, which can reach up to 90-100%, are known to be strongly related to water temperature. PKD has been present in brown trout populations for a long time but has recently increased rapidly in incidence and severity, causing a decline in fish catches in many countries. In addition, environmental changes are feared to cause PKD outbreaks at higher latitudes and altitudes, as warmer temperatures promote disease development. This calls for a better comprehension of the interactions between disease dynamics and the thermal regime of rivers, in order to possibly devise strategies for disease management. In this perspective, a spatially explicit model of PKD epidemiology in riverine host metacommunities is proposed. The model aims at summarizing the knowledge on the modes of transmission of the disease and the life cycle of the parasite, making the connection between temperature and epidemiological parameters explicit. The model accounts for both local population and disease dynamics of bryozoans and fish and for hydrodynamic dispersion of the parasite spores and hosts along the river network. The model is time-hybrid, coupling inter-seasonal and intra-seasonal dynamics, the former being described in a continuous time domain, the latter seen as time steps of a discrete time domain. In order to test the model, a case study is conducted in the river Wigger (Cantons of Aargau and Lucerne, Switzerland), where data about water temperature, brown trout and bryozoan populations, and PKD prevalence are being collected.

  5. Making the morally relevant features explicit: a response to Carson Strong.

    PubMed

    Gert, Bernard

    2006-03-01

    Carson Strong criticizes the application of my moral theory to bioethics cases. Some of his criticisms are due to my failure to make explicit that both the irrationality or rationality of a decision and the irrationality or rationality of the ranking of evils are part of morally relevant feature 3. Other criticisms are the result of his not using the two-step procedure in a sufficiently rigorous way. His claim that I come up with a wrong answer depends upon his incorrectly regarding a weakly justified violation as one that all impartial rational persons would agree was permitted, rather than as one about which rational persons disagree.

  6. Adaptive steganography

    NASA Astrophysics Data System (ADS)

    Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.

    2002-04-01

    Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
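The notion of adapting LSB embedding to image content can be shown with a toy example. The sketch below (an illustration of the idea only, not the scheme analyzed in the paper) writes message bits only into high-contrast pixel pairs; the selection rule uses the upper seven bits, so flipping an LSB never changes which pixels are selected and the extractor can re-derive the rule from the stego signal alone.

```python
# Toy content-adaptive LSB embedding: flat regions, where LSB changes
# are easiest to detect, are skipped.

def busy(a, b, t=5):
    """Local-contrast criterion on the upper 7 bits only, so that
    flipping an LSB never changes which pixels are selected."""
    return abs((a >> 1) - (b >> 1)) >= t

def embed(pixels, bits):
    out, j = list(pixels), 0
    for i in range(1, len(out)):
        if j >= len(bits):
            break
        if busy(out[i], out[i - 1]):
            out[i] = (out[i] & ~1) | bits[j]  # overwrite the LSB
            j += 1
    return out, j                             # stego pixels, bits embedded

def extract(stego):
    return [stego[i] & 1 for i in range(1, len(stego))
            if busy(stego[i], stego[i - 1])]

cover = [10, 10, 30, 30, 200, 12, 12, 90]
stego, n = embed(cover, [1, 0, 1])
print(n, extract(stego)[:n])  # 3 [1, 0, 1]
```

A real adaptive scheme would use a richer local statistic and a key-driven embedding path, but the invariance of the selection rule under embedding is the essential trick.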

  7. [Addictions: Motivated or forced care].

    PubMed

    Cottencin, Olivier; Bence, Camille

    2016-12-01

Patients presenting with addictions are often obliged to consult. This constraint can be explicit (partner, children, parents, doctor, police, justice system) or implicit (for their children, for their families, or for their health). Thus, beyond the paradox of caring for subjects who do not ask for treatment, the caregiver also faces a double bind: being seen either as an enforcer of the social order or as a helper of patients. The transtheoretical model of change is complex, showing that change is neither fixed in time nor permanent for a given individual. This model accommodates ambivalence, resistance, and even relapse, yet it still treats constraint more as a brake than as an effective tool. The therapist must have adequate communication tools to enable everyone (coerced or not) to understand that involvement in care will allow them to regain their free will, even if coercion was the path there. In this article, we detail the first steps with a patient presenting with addiction: looking for constraint (implicit or explicit), working with constraint, avoiding creating resistance ourselves, and making constraint a powerful motivator for change. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  8. A Global Magnetohydrodynamic Model of Jovian Magnetosphere

    NASA Technical Reports Server (NTRS)

    Walker, Raymond J.; Sharber, James (Technical Monitor)

    2001-01-01

The goal of this project was to develop a new global magnetohydrodynamic model of the interaction of the Jovian magnetosphere with the solar wind. Observations from 28 orbits of Jupiter by Galileo, along with those from previous spacecraft at Jupiter (Pioneer 10 and 11, Voyager 1 and 2, and Ulysses), have revealed that the Jovian magnetosphere is a vast, complicated system. The Jovian aurora also has been monitored for several years. Like auroral observations at Earth, these measurements provide us with a global picture of magnetospheric dynamics. Despite this wide range of observations, we have limited quantitative understanding of the Jovian magnetosphere and how it interacts with the solar wind. For the past several years we have been working toward a quantitative understanding of the Jovian magnetosphere and its interaction with the solar wind by employing global magnetohydrodynamic simulations to model the magnetosphere. Our model has been an explicit MHD code (previously used to model the Earth's magnetosphere). We continue to obtain important insights with this code, but it suffers from some severe limitations. In particular, with this code we are limited to considering the region outside of 15 R_J, with cell sizes of about 1.5 R_J. The problem arises because of the presence of widely separated time scales throughout the magnetosphere. The numerical stability criterion for explicit MHD codes is the CFL limit, C_max * Delta_t / Delta_x < 1, where C_max is the maximum group velocity in a given cell, Delta_x is the grid spacing, and Delta_t is the time step. If the maximum wave velocity is C_w and the flow speed is C_f, then C_max = C_w + C_f. Near Jupiter the Alfven wave speed becomes very large (it approaches the speed of light at one Jovian radius). Operating with this time step makes the calculation essentially intractable. 
Therefore, under this funding we have been designing a new MHD model that will be able to compute solutions in the wide parameter regime of the Jovian magnetosphere.
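The CFL limit quoted in this abstract lends itself to a short numerical illustration. A minimal sketch with made-up grid spacing and wave speeds, not values from the actual Jovian simulation:

```python
# Sketch of the explicit-code CFL restriction: dt < dx / (C_w + C_f).
# All numbers are illustrative round values.

def cfl_time_step(dx, c_wave, c_flow, safety=0.8):
    """Largest stable explicit time step for a cell of size dx."""
    c_max = c_wave + c_flow          # fastest signal speed in the cell
    return safety * dx / c_max

# Far from the planet: moderate Alfven speed, generous time step.
dt_far = cfl_time_step(dx=1.5e8, c_wave=5.0e5, c_flow=4.0e5)   # m, m/s

# Near the planet: the Alfven speed approaches the speed of light,
# so the same cell size forces a time step hundreds of times smaller.
dt_near = cfl_time_step(dx=1.5e8, c_wave=2.9e8, c_flow=4.0e5)

print(dt_far / dt_near)  # ratio of several hundred
```

Since the whole grid must march at the smallest stable step, one near-planet cell with a relativistic Alfven speed throttles the entire simulation, which is the motivation for the new model.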

  9. Benchmarks of Historical Thinking: First Steps

    ERIC Educational Resources Information Center

    Peck, Carla; Seixas, Peter

    2008-01-01

    Although historical thinking has been the subject of a substantial body of recent research, few attempts explicitly apply the results on a large scale in North America. This article, a narrative inquiry, examines the first stages of a multi-year, Canada-wide project to reform history education through the development of classroom-based…

  10. Multiscale time-dependent density functional theory: Demonstration for plasmons.

    PubMed

    Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J

    2017-08-07

    Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the latter. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency over classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the consequence of larger time steps that can be used in a discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.

  11. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mignone, A.; Tzeferacos, P.; Zanni, C.

We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  12. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way and has a significantly faster convergence rate.

  13. Technical Note: Improving the VMERGE treatment planning algorithm for rotational radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaddy, Melissa R., E-mail: mrgaddy@ncsu.edu; Papp,

    2016-07-15

Purpose: The authors revisit the VMERGE treatment planning algorithm by Craft et al. [“Multicriteria VMAT optimization,” Med. Phys. 39, 686–696 (2012)] for arc therapy planning and propose two changes to the method that are aimed at improving the achieved trade-off between treatment time and plan quality at little additional planning time cost, while retaining other desirable properties of the original algorithm. Methods: The original VMERGE algorithm first computes an “ideal,” high quality but also highly time consuming treatment plan that irradiates the patient from all possible angles in a fine angular grid with a highly modulated beam, and then makes this plan deliverable within practical treatment time by an iterative fluence map merging and sequencing algorithm. We propose two changes to this method. First, we regularize the ideal plan obtained in the first step by adding an explicit constraint on treatment time. Second, we propose a different merging criterion that consists of identifying and merging adjacent maps whose merging results in the least degradation of radiation dose. Results: The effect of both suggested modifications is evaluated individually and jointly on clinical prostate and paraspinal cases. Details of the two cases are reported. Conclusions: In the authors’ computational study they found that both proposed modifications, especially the regularization, yield noticeably improved treatment plans for the same treatment times compared with the original VMERGE method. The resulting plans match the quality of 20-beam step-and-shoot IMRT plans with a delivery time of approximately 2 min.

  14. Molecular modelling of protein-protein/protein-solvent interactions

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler

The inner workings of individual cells are based on intricate networks of protein-protein interactions. However, each of these individual protein interactions requires a complex physical interaction between proteins and their aqueous environment at the atomic scale. In this thesis, molecular dynamics simulations are used in three theoretical studies to gain insight at the atomic scale about protein hydration, protein structure and tubulin-tubulin (protein-protein) interactions, as found in microtubules. Also presented, in a fourth project, is a molecular model of solvation coupled with the Amber molecular modelling package, to facilitate further studies without the need for explicitly modelled water. Basic properties of a minimally solvated protein were calculated through an extended study of myoglobin hydration with explicit solvent, directly investigating water and protein polarization. Results indicate a close correlation between polarization of both water and protein and the onset of protein function. The methodology of explicit solvent molecular dynamics was further used to study tubulin and microtubules. Extensive conformational sampling of the carboxy-terminal tails of β-tubulin was performed via replica exchange molecular dynamics, allowing the characterisation of the flexibility, secondary structure and binding domains of the C-terminal tails through statistical analysis methods. Mechanical properties of tubulin and microtubules were calculated with adaptive biasing force molecular dynamics. The function of the M-loop in microtubule stability was demonstrated in these simulations. The flexibility of this loop allowed constant contacts between the protofilaments to be maintained during simulations while the smooth deformation provided a spring-like restoring force. 
Additionally, calculating the free energy profile between the straight and bent tubulin configurations was used to test the proposed conformational change in tubulin, thought to cause microtubule destabilization. No conformational change was observed, but a nucleotide-dependent 'softening' of the interaction was found instead, suggesting that an entropic force in a microtubule configuration could be the mechanism of microtubule collapse. Finally, to overcome much of the computational cost associated with explicit solvent calculations, a new combination of molecular dynamics with the 3D-reference interaction site model (3D-RISM) of solvation was integrated into the Amber molecular dynamics package. Our implementation of 3D-RISM shows excellent agreement with explicit solvent free energy calculations. Several optimisation techniques, including a new multiple time step method, provide a nearly 100-fold performance increase, giving computational performance similar to that of explicit solvent.

  15. High order spectral volume and spectral difference methods on unstructured grids

    NASA Astrophysics Data System (ADS)

    Kannan, Ravishekar

The spectral volume (SV) and the spectral difference (SD) methods were developed by Wang and Liu and their collaborators for conservation laws on unstructured grids. They were introduced to achieve high-order accuracy in an efficient manner. Recently, these methods were extended to three-dimensional systems and to the Navier-Stokes equations. The simplicity and robustness of these methods have made them competitive against other higher order methods such as the discontinuous Galerkin and residual distribution methods. Although explicit TVD Runge-Kutta schemes for the temporal advancement are easy to implement, they suffer from the small time step imposed by the Courant-Friedrichs-Lewy (CFL) condition. When the polynomial order is high or when the grid is stretched due to complex geometries or boundary layers, the convergence rate of explicit schemes slows down rapidly. Solution strategies to remedy this problem include implicit methods and multigrid methods. A novel implicit lower-upper symmetric Gauss-Seidel (LU-SGS) relaxation method is employed as an iterative smoother. It is compared to the explicit TVD Runge-Kutta smoothers. For some p-multigrid calculations, combining implicit and explicit smoothers for different p-levels is also studied. The multigrid method considered is nonlinear and uses the Full Approximation Scheme (FAS). An overall speed-up factor of up to 150 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single-level explicit method for the Euler equations for the 3rd order SD method. A study of viscous flux formulations was carried out for the SV method. Three formulations were used to discretize the viscous fluxes: local discontinuous Galerkin (LDG), a penalty method and the 2nd method of Bassi and Rebay. Fourier analysis revealed some interesting advantages for the penalty method. These were implemented in the Navier-Stokes solver. An implicit and p-multigrid method was also implemented for the above. 
An overall speed-up factor of up to 1500 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single-level explicit method for the Navier-Stokes equations. The SV method was also extended to turbulent flows. The RANS-based Spalart-Allmaras (SA) model was used to close the Reynolds stresses. The numerical results are very promising and indicate that the approaches have great potential for 3D flow problems.
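The symmetric Gauss-Seidel idea behind the LU-SGS smoother can be illustrated on a much simpler problem. A toy sketch on a 1-D Poisson equation -u'' = f, not the actual SV/SD solver: each sweep is a forward (lower-triangular) pass followed by a backward (upper-triangular) pass, with no CFL-like step restriction.

```python
# Symmetric Gauss-Seidel sweeps for -u'' = f on a uniform grid with
# zero Dirichlet boundaries; a toy stand-in for the LU-SGS smoother.

def sgs_sweep(u, f, h):
    """One forward then one backward Gauss-Seidel pass, in place."""
    n = len(u)
    for i in range(1, n - 1):                       # lower-triangular pass
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    for i in range(n - 2, 0, -1):                   # upper-triangular pass
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

n = 9
h = 1.0 / (n - 1)
f = [1.0] * n        # constant forcing
u = [0.0] * n        # zero initial guess and boundary values

for _ in range(200):
    u = sgs_sweep(u, f, h)

# For f = 1 the discrete solution equals u(x) = x(1 - x)/2 at the nodes.
err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(n))
print(err < 1e-10)
```

In a p-multigrid setting such sweeps serve as the smoother on each polynomial level; the sketch shows only the single-level relaxation.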

  16. Progress with multigrid schemes for hypersonic flow problems

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Swanson, R. C.

    1991-01-01

Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25.

  17. Flow solution on a dual-block grid around an airplane

    NASA Technical Reports Server (NTRS)

    Eriksson, Lars-Erik

    1987-01-01

    The compressible flow around a complex fighter-aircraft configuration (fuselage, cranked delta wing, canard, and inlet) is simulated numerically using a novel grid scheme and a finite-volume Euler solver. The patched dual-block grid is generated by an algebraic procedure based on transfinite interpolation, and the explicit Runge-Kutta time-stepping Euler solver is implemented with a high degree of vectorization on a Cyber 205 processor. Results are presented in extensive graphs and diagrams and characterized in detail. The concentration of grid points near the wing apex in the present scheme is shown to facilitate capture of the vortex generated by the leading edge at high angles of attack and modeling of its interaction with the canard wake.

  18. Stable time filtering of strongly unstable spatially extended systems

    PubMed Central

    Grote, Marcus J.; Majda, Andrew J.

    2006-01-01

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant–Friedrichs–Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection–diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system. PMID:16682626
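The CFL restriction at issue can be illustrated with a minimal sketch (not the authors' prototype model): an explicit forward-time, centred-space (FTCS) discretization of a convection-diffusion equation, run just below and just above its diffusive stability limit. All parameter values are illustrative assumptions.

```python
import numpy as np

def ftcs_step(u, a, nu, dt, dx):
    # Forward-time, centred-space (explicit) step for u_t + a u_x = nu u_xx
    # on a periodic grid; subject to the diffusive limit dt <= dx**2 / (2*nu).
    up, um = np.roll(u, -1), np.roll(u, 1)
    return (u - a * dt / (2 * dx) * (up - um)
            + nu * dt / dx ** 2 * (up - 2 * u + um))

def run(nsteps, dt_factor):
    # Integrate a sine wave with dt = dt_factor * (stability limit) and
    # return the final max amplitude: bounded when stable, explosive when not.
    n, a, nu = 64, 1.0, 0.1
    dx = 2 * np.pi / n
    dt = dt_factor * dx ** 2 / (2 * nu)
    u = np.sin(np.arange(n) * dx)
    for _ in range(nsteps):
        u = ftcs_step(u, a, nu, dt, dx)
    return np.abs(u).max()
```

With dt_factor below 1 the solution stays bounded; above 1, round-off in the highest Fourier modes is amplified at every step and the scheme blows up, which is the instability the filtering strategy must contend with.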

  19. Stable time filtering of strongly unstable spatially extended systems.

    PubMed

    Grote, Marcus J; Majda, Andrew J

    2006-05-16

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant-Friedrichs-Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection-diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system.

  20. Corrected Implicit Monte Carlo

    DOE PAGES

    Cleveland, Mathew Allen; Wollaber, Allan Benton

    2018-01-02

In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative, Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. Finally, we present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.

  1. Finite difference methods for transient signal propagation in stratified dispersive media

    NASA Technical Reports Server (NTRS)

    Lam, D. H.

    1975-01-01

Explicit difference equations are presented for the solution of a signal of arbitrary waveform propagating in an ohmic dielectric, a cold plasma, a Debye model dielectric, and a Lorentz model dielectric. These difference equations are derived from the governing time-dependent integro-differential equations for the electric fields by a finite difference method. A special difference equation is derived for the grid point at the boundary of two different media. Employing this difference equation, transient signal propagation in an inhomogeneous medium can be solved provided that the medium is approximated in a step-wise fashion. The solutions are generated simply by marching on in time. It is concluded that while the classical transform methods will remain useful in certain cases, with the development of the finite difference methods described, an extensive class of problems of transient signal propagation in stratified dispersive media can be effectively solved by numerical methods.
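A minimal sketch of the "march on in time" strategy, under the simplifying assumption of a lossless wave equation with a step-wise (two-layer) wave speed rather than the paper's dispersive media models; all grid and pulse parameters are illustrative.

```python
import numpy as np

def march(nsteps):
    # Leapfrog time marching for u_tt = c(x)^2 u_xx with a step-wise
    # (two-layer) wave speed, on a periodic grid; the explicit step is
    # limited by the CFL condition dt <= dx / max(c).
    n = 200
    dx = 1.0 / n
    c = np.where(np.arange(n) < n // 2, 1.0, 0.5)   # stratified medium
    dt = 0.9 * dx / c.max()
    x = (np.arange(n) + 0.5) * dx
    u_old = np.exp(-((x - 0.25) / 0.05) ** 2)       # pulse in the fast layer
    # First step, consistent with zero initial velocity:
    lap = np.roll(u_old, -1) - 2 * u_old + np.roll(u_old, 1)
    u = u_old + 0.5 * (c * dt / dx) ** 2 * lap
    for _ in range(nsteps - 1):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        u, u_old = 2 * u - u_old + (c * dt / dx) ** 2 * lap, u
    return u
```

Each step uses only the two previous time levels, so the solution is generated simply by marching forward, with reflection and transmission at the layer interface emerging from the step-wise coefficient.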

  2. Corrected implicit Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cleveland, M. A.; Wollaber, A. B.

    2018-04-01

In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative, Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. We present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.

  3. From cat's eyes to disjoint multicellular natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Nicolás, Alfredo; Báez, Elsa; Bermúdez, Blanca

    2011-07-01

Numerical results of two-dimensional natural convection problems in air-filled tall cavities are reported to study the change of the cat's eyes flow as two parameters vary, the aspect ratio A and the angle of inclination ϕ of the cavity, with the Rayleigh number Ra mostly fixed; explicitly, the range of variation is given by 12⩽A⩽20 and 0°⩽ϕ⩽270°; about Ra=1.1×10. A novel contribution of this work is the transition from the cat's eyes changes, as A varies, to a disjoint multicellular flow, as ϕ varies. These flows may be modeled by the unsteady Boussinesq approximation in stream function and vorticity variables, which is solved with a fixed point iterative process applied to the nonlinear elliptic system that results after time discretization. The validation of the results relies on mesh size and time-step independence studies.

  4. A three-dimensional, time-dependent model of Mobile Bay

    NASA Technical Reports Server (NTRS)

    Pitts, F. H.; Farmer, R. C.

    1976-01-01

A three-dimensional, time-variant mathematical model for momentum and mass transport in estuaries was developed and its solution implemented on a digital computer. The mathematical model is based on state and conservation equations applied to turbulent flow of a two-component, incompressible fluid having a free surface. Thus, buoyancy effects caused by density differences between the fresh and salt water, inertia from the river and tidal currents, and differences in hydrostatic head are taken into account. The conservation equations, which are partial differential equations, are solved numerically by an explicit, one-step finite difference scheme and the solutions displayed numerically and graphically. To test the validity of the model, a specific estuary for which scaled model and experimental field data are available, Mobile Bay, was simulated. Comparisons of velocity, salinity and water level data show that the model is valid and a viable means of simulating the hydrodynamics and mass transport in non-idealized estuaries.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew Allen; Wollaber, Allan Benton

In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative, Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. Finally, we present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.

  6. Given a one-step numerical scheme, on which ordinary differential equations is it exact?

    NASA Astrophysics Data System (ADS)

    Villatoro, Francisco R.

    2009-01-01

    A necessary condition for a (non-autonomous) ordinary differential equation to be exactly solved by a one-step, finite difference method is that the principal term of its local truncation error be null. A procedure to determine some ordinary differential equations exactly solved by a given numerical scheme is developed. Examples of differential equations exactly solved by the explicit Euler, implicit Euler, trapezoidal rule, second-order Taylor, third-order Taylor, van Niekerk's second-order rational, and van Niekerk's third-order rational methods are presented.
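The explicit Euler case is easy to verify numerically: for an ODE whose solution is linear in t, the principal term of the local truncation error (proportional to y'') vanishes, so the scheme is exact for any step size. A small illustrative check, not taken from the paper:

```python
import math

def euler(f, t0, y0, h, nsteps):
    # Explicit (forward) Euler: y_{n+1} = y_n + h * f(t_n, y_n).
    t, y = t0, y0
    for _ in range(nsteps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = 2 has the linear solution y = 2t, so y'' = 0 and the principal
# truncation error term vanishes: Euler is exact for any step size.
exact_case = euler(lambda t, y: 2.0, 0.0, 0.0, 0.25, 8)   # 4.0 at t = 2, exactly

# y' = y has solution e^t with y'' != 0: Euler only approximates it.
approx_case = euler(lambda t, y: y, 0.0, 1.0, 0.25, 8)    # (1.25)**8, below e**2
```

The contrast between the two cases is exactly the necessary condition stated in the abstract: a null principal truncation term in the first problem, a nonzero one in the second.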

  7. Using implicit attitudes of exercise importance to predict explicit exercise dependence symptoms and exercise behaviors.

    PubMed

    Forrest, Lauren N; Smith, April R; Fussner, Lauren M; Dodd, Dorian R; Clerkin, Elise M

    2016-01-01

"Fast" (i.e., implicit) processing is relatively automatic; "slow" (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the number of days exercised. Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence.

  8. Using implicit attitudes of exercise importance to predict explicit exercise dependence symptoms and exercise behaviors

    PubMed Central

    Forrest, Lauren N.; Smith, April R.; Fussner, Lauren M.; Dodd, Dorian R.; Clerkin, Elise M.

    2015-01-01

Objectives "Fast" (i.e., implicit) processing is relatively automatic; "slow" (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. Design We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Method Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Results Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the number of days exercised. Conclusion Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence. PMID:26195916

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volker, Arno; Hunter, Alan

Anisotropic materials are being used increasingly in high performance industrial applications, particularly in the aeronautical and nuclear industries. Some important examples of these materials are composites, single-crystal and heavy-grained metals. Ultrasonic array imaging in these materials requires exact knowledge of the anisotropic material properties. Without this information, the images can be adversely affected, causing a reduction in defect detection and characterization performance. The imaging operation can be formulated in two consecutive and reciprocal focusing steps, i.e., focusing the sources and then focusing the receivers. Applying just one of these focusing steps yields an interesting intermediate domain. The resulting common focus point gather (CFP-gather) can be interpreted to determine the propagation operator. After focusing the sources, the observed travel-time in the CFP-gather describes the propagation from the focus point to the receivers. If the correct propagation operator is used, the measured travel-times should be the same as the time-reversed focusing operator due to reciprocity. This makes it possible to iteratively update the focusing operator using the data only and allows the material to be imaged without explicit knowledge of the anisotropic material parameters. Furthermore, the determined propagation operator can also be used to invert for the anisotropic medium parameters. This paper details the proposed technique and demonstrates its use on simulated array data from a specimen of Inconel single-crystal alloy commonly used in the aeronautical and nuclear industries.

  10. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
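The gradient/proximal decomposition described above can be sketched in its simplest setting: a least-squares data-fidelity term with an ℓ1 penalty, where the soft-thresholding proximal operator handles the nonsmooth term without ever differentiating it. This is an illustrative proximal-gradient sketch, not the paper's regularized dual averaging method or its wave-equation forward model; all names and values are assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 : shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters):
    # Split iteration: a gradient step on the smooth data-fidelity term
    # 0.5 * ||Ax - b||^2, then a proximal step for the nonsmooth lam*||x||_1,
    # which is never explicitly differentiated.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                     # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

The same two-step structure (gradient update on the data term, proximal update on the regularizer) is what allows nonsmooth penalties to be incorporated naturally.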

  11. A time delay controller for magnetic bearings

    NASA Technical Reports Server (NTRS)

    Youcef-Toumi, K.; Reddy, S.

    1991-01-01

The control of systems with unknown dynamics and unpredictable disturbances has raised some challenging problems. This is particularly important when high system performance needs to be guaranteed at all times. Recently, the Time Delay Control has been suggested as an alternative control scheme. The proposed control system does not require an explicit plant model nor does it depend on the estimation of specific plant parameters. Rather, it combines adaptation with past observations to directly estimate the effect of the plant dynamics. A control law is formulated for a class of dynamic systems and a sufficient condition is presented for control systems stability. The derivation is based on the bounded input-bounded output stability approach using L∞ function norms. The control scheme is implemented on a five degrees of freedom high speed and high precision magnetic bearing. The control performance is evaluated using step responses, frequency responses, and disturbance rejection properties. The experimental data show an excellent control performance despite the system complexity.

  12. A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge-Kutta method

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-08-01

In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on the finite difference technique at each local-support domain Ωi. At each Ωi, we need to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (interpolation matrix). This scheme is efficient and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied. This algorithm computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied to approximate the time variable. This also decreases the computational cost at each time step, since no nonlinear system needs to be solved. Finally, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
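The classical fourth-order Runge-Kutta step used for the time variable can be sketched as follows; the linear test problem is an illustrative stand-in for the semi-discrete system produced by the RBF-FD spatial discretization, not the paper's actual operator.

```python
import numpy as np

def rk4_step(f, u, dt):
    # Classical fourth-order Runge-Kutta step for the semi-discrete
    # system u' = f(u) left after the spatial discretization.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Linear test problem u' = i*u, whose exact solution preserves |u|,
# a caricature of the oscillatory Schrodinger dynamics:
u = np.array([1.0 + 0.0j])
dt, nsteps = 0.01, 1000
for _ in range(nsteps):
    u = rk4_step(lambda v: 1j * v, u, dt)
```

Because every stage is an explicit function evaluation, no nonlinear system is solved at any time step, which is the cost advantage noted in the abstract.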

  13. FT-IR study and solvent-implicit and explicit effect on stepwise tautomerism of Guanylurea: M06-2X as a case of study

    NASA Astrophysics Data System (ADS)

    Karimzadeh, Morteza; Manouchehri, Neda; Saberi, Dariush; Niknam, Khodabakhsh

    2018-06-01

All 66 conformers of guanylurea were optimized and frequency calculations were performed at the M06-2X/6-311++G(d,p) level of theory. These conformers were categorized into five tautomers, and the most stable conformer of each tautomer was found. Geometrical parameters indicated that these tautomers have almost planar structures. Complete stepwise tautomerism was studied through both intramolecular proton transfer routes and internal rotations. Results indicated that the proton transfer routes involving four-membered heterocyclic structures were rate-determining steps. Also, intramolecular proton movement through six-membered transition state structures had very low energy barriers compared to the transition states of the internal rotation routes. The studied tautomers can easily be differentiated through their FT-IR spectra in the range of 3200 to 3900 cm-1 by comparing absorption bands and peak intensities. Solvent-implicit effects on the stability of the tautomers were also studied through re-optimization and frequency calculation in four solvents. Water, DMSO, acetone and toluene all had a stabilization effect on the considered tautomers, in the order water > DMSO > acetone > toluene. Finally, solvent-explicit, base-explicit and acid-explicit effects were also studied by placing the studied tautomer next to an acid, base or solvent molecule and optimizing the resulting complex. Frequency calculations for proton movement with explicit effects showed that formic acid has a very strong effect on proton transfer from tautomer A1 to tautomer D8, lowering the energy barrier from 42.57 to 0.8 kcal/mol. In addition, the ammonia-explicit effect was found to lower the barrier from 42.57 to 22.46 kcal/mol, although this effect is weaker than the water-explicit and methanol-explicit effects.

  14. Thermodynamics of urban population flows.

    PubMed

    Hernando, A; Plastino, A

    2012-12-01

    Orderliness, reflected via mathematical laws, is encountered in different frameworks involving social groups. Here we show that a thermodynamics can be constructed that macroscopically describes urban population flows. Microscopic dynamic equations and simulations with random walkers underlie the macroscopic approach. Our results might be regarded, via suitable analogies, as a step towards building an explicit social thermodynamics.

  15. Improving Expository Writing Skills with Explicit and Strategy Instructional Methods in Inclusive Middle School Classrooms

    ERIC Educational Resources Information Center

    Cihak, David F.; Castle, Kristin

    2011-01-01

    Forty eighth grade students with and without learning disabilities in an inclusive classroom participated in an adapted Step-Up to Writing (Auman, 2002) intervention program. The intervention targeted expository essays and composing topic, detail, transitional, and concluding sentences. A repeated-measures ANOVA indicated that both students with…

  16. Low molecular weight oligomers of amyloid peptides display β-barrel conformations: A replica exchange molecular dynamics study in explicit solvent

    NASA Astrophysics Data System (ADS)

    De Simone, Alfonso; Derreumaux, Philippe

    2010-04-01

The self-assembly of proteins and peptides into amyloid fibrils is connected to over 40 pathological conditions including neurodegenerative diseases and systemic amyloidosis. Diffusible, low molecular weight protein and peptide oligomers that form in the early steps of aggregation appear to be the harmful cytotoxic species in the molecular etiology of these diseases. So far, the structural characterization of these oligomers has remained elusive owing to their transient and dynamic features. We here address, by means of full atomistic replica exchange molecular dynamics simulations, the energy landscape of heptamers of the amyloidogenic peptide NHVTLSQ from the beta-2 microglobulin protein. The simulations, totaling 5 μs, show that low molecular weight oligomers in explicit solvent consist of β-barrels in equilibrium with amorphous states and fibril-like assemblies. The results, also accounting for the influence of the pH on the conformational properties, provide strong evidence of the formation of transient β-barrel assemblies in the early aggregation steps of amyloid-forming systems. Our findings are discussed in terms of oligomer cytotoxicity.

  17. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1994-01-01

    The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
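An unsplit, explicit, two-step (predictor-corrector) finite-difference scheme of the kind NASCRIN uses can be sketched on the linear advection equation; this MacCormack-type example is an illustrative assumption, not the code's actual Euler/Navier-Stokes discretization.

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    # Explicit two-step (predictor-corrector) scheme for u_t + a u_x = 0
    # on a periodic grid: forward-difference predictor, backward corrector.
    c = a * dt / dx
    pred = u - c * (np.roll(u, -1) - u)
    return 0.5 * (u + pred - c * (pred - np.roll(pred, 1)))

n = 100
dx = 1.0 / n
a, dt = 1.0, 0.5 * dx                  # CFL number 0.5
x = np.arange(n) * dx
u0 = np.sin(2 * np.pi * x)
u = u0.copy()
for _ in range(200):                   # advect for exactly one period (t = 1)
    u = maccormack_step(u, a, dt, dx)
```

For linear advection this two-step scheme is second-order accurate, so after one full period the wave returns close to its initial profile.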

  18. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey on 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has decreased over the past 17 years: from 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years; mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale.
Is it worth the effort of downscaling your model from 1 degree to 1/24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher spatial resolution-model for different time steps. In order to do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model, and three spatially distributed models with a resolution of respectively 1x1 km, 5x5 km, and 10x10 km. All models are run at an hourly time step and aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling Technique (Vořechovský, 2014). For each time and space scale, several diagnostics like Nash-Sutcliffe efficiency, Kling-Gupta efficiency, all the quantiles of the discharge etc., are calculated in order to compare model performance over different time and space scales for extreme events like floods and droughts. Next to that, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find a link for optimal time and space scale combinations.
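The effect of the evaluation time step on a performance diagnostic can be sketched with a toy example (synthetic data, not the Thur basin models): a simulation with a pure timing error scores poorly on hourly Nash-Sutcliffe efficiency but nearly perfectly once both series are aggregated to daily values. All signals and parameters below are illustrative assumptions.

```python
import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def aggregate(series, window):
    # Block-average a series to a coarser time step (window=24: hourly -> daily).
    series = np.asarray(series, float)
    n = (len(series) // window) * window
    return series[:n].reshape(-1, window).mean(axis=1)

t = np.arange(24 * 30)                                   # 30 days, hourly
slow = 0.5 * np.sin(2 * np.pi * t / (24.0 * 30.0))       # slow baseflow signal
obs = 1.0 + slow + np.sin(2 * np.pi * t / 24.0)          # diurnal cycle
sim = 1.0 + slow + np.sin(2 * np.pi * (t - 6) / 24.0)    # 6-hour timing error
nse_hourly = nse(sim, obs)
nse_daily = nse(aggregate(sim, 24), aggregate(obs, 24))
```

The same model can thus look poor or excellent depending solely on the evaluation time step, which is the crux of linking time and space scales in calibration and validation.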

  19. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma², where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
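The order-of-magnitude gain in the explicit time step can be seen directly from the CFL limit: rescaling the effective sound speed from c to c/α relaxes dt ≈ Δx/(|u| + c/α). A back-of-the-envelope sketch with assumed low-Mach values (not the paper's flame configurations):

```python
def acoustic_dt(dx, u, c, alpha=1.0):
    # CFL-limited explicit time step; a PGS-type rescaling lowers the
    # effective sound speed from c to c / alpha.
    return dx / (abs(u) + c / alpha)

dx, u, c = 1e-3, 1.0, 340.0            # low-Mach flow, Ma ~ 0.003
dt_plain = acoustic_dt(dx, u, c)
dt_pgs = acoustic_dt(dx, u, c, alpha=10.0)
```

At low Mach number the acoustic term dominates the denominator, so a tenfold reduction of the effective sound speed yields close to a tenfold larger time step.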

  20. Trend assessment: applications for hydrology and climate research

    NASA Astrophysics Data System (ADS)

    Kallache, M.; Rust, H. W.; Kropp, J.

    2005-02-01

The assessment of trends in climatology and hydrology is still a matter of debate. Capturing typical properties of time series, such as trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution, river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced in three steps. First, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time scales, assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
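The logic of the final comparison step can be illustrated with a heavily simplified sketch: here AR(1) noise stands in for the fitted FARIMA process and a wide moving average stands in for the wavelet scale separation, so this is only an analogue of the method, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# AR(1) noise as a crude stand-in for the short-range part of a FARIMA model
phi, eps = 0.6, rng.normal(0, 1, n)
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = phi * noise[t - 1] + eps[t]

trend = 0.002 * np.arange(n)            # weak deterministic trend
x = trend + noise

# Separate small and large scales with a wide moving average
# (a crude stand-in for the wavelet decomposition used in the paper)
w = 201
kernel = np.ones(w) / w
large = np.convolve(x, kernel, mode="same")
small = x - large

# If the overall variability clearly exceeds the small-scale variability,
# a large-scale (trend-like) component is present.
ratio = x.var() / small.var()
print(f"variance ratio (total / small-scale): {ratio:.2f}")
```

A ratio well above one signals variability living at large scales; turning this into a proper significance test requires the null distribution of the ratio under the fitted stochastic model, which is the part the FARIMA fit provides.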

  1. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.
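The stiffness argument above (unconditional stability rules out explicit integration) is easy to demonstrate on a two-mode linear test system; the example below is a generic illustration, not the GI algorithm of the paper.

```python
import numpy as np

# Stiff linear test system u' = -k u with widely separated decay rates,
# mimicking the wide spectrum of structural dynamics equations.
k = np.array([1.0, 1e4])          # slow mode and stiff mode
u0 = np.array([1.0, 1.0])
dt, nsteps = 0.1, 100             # dt far above the explicit limit 2/k_max

u_exp = u0.copy()
u_imp = u0.copy()
for _ in range(nsteps):
    u_exp = u_exp + dt * (-k * u_exp)        # forward Euler: stiff mode blows up
    u_imp = u_imp / (1.0 + dt * k)           # backward Euler: unconditionally stable

print("explicit:", u_exp)   # stiff component has exploded
print("implicit:", u_imp)   # both components decay; slow mode stays accurate
```

The implicit update damps the high-frequency mode without resolving it, which is exactly the behaviour the paper demands of algorithms for transient structural analysis.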

  2. Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005

    USGS Publications Warehouse

    Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.

    2012-01-01

As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
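One plausible form of the conductivity-reduction idea can be sketched as follows. This is our own reading of "a generalized power function with a user-defined exponent", not the CFPM2 source: the conductivity is scaled so that the nominally linear flux law v = K_eff * i actually yields v ∝ i^(1/m) beyond a critical gradient, with m = 2 mimicking fully turbulent, Darcy-Weisbach-like pipe flow.

```python
def effective_conductivity(K_lam, grad, grad_crit, m=2.0):
    """Hypothetical power-law reduction of hydraulic conductivity.

    Below grad_crit, plain Darcy flow (v = K_lam * i).  Above it, K is
    reduced so that v = K_eff * i behaves like v ~ i**(1/m); m is the
    user-defined exponent (m = 2 gives turbulent square-root behaviour)."""
    if grad <= grad_crit:
        return K_lam
    return K_lam * (grad_crit / grad) ** (1.0 - 1.0 / m)

K = 100.0        # laminar conductivity, m/d (illustrative)
ic = 0.01        # critical gradient for the onset of turbulence (illustrative)
for i in (0.005, 0.01, 0.04, 0.16):
    v = effective_conductivity(K, i, ic) * i
    print(f"i={i:6.3f}  v={v:8.4f}")
```

With m = 2, quadrupling the gradient only doubles the velocity in the turbulent range, and because K_eff depends only on the current gradient, no iterative conductivity adjustment across time steps is needed.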

  3. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.

  4. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    PubMed

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  5. Numerical simulation of pounding damage to caisson under storm surge

    NASA Astrophysics Data System (ADS)

    Yu, Chen

    2018-06-01

In this paper, a new method for the numerical simulation of structural response is proposed and employed to analyze the pounding response of caissons subjected to storm surge loads. The simulation process is divided into two steps. First, the wave propagation caused by the storm surge is simulated with the wave-generating tool of Flow-3D, and the wave force time history on the caisson is recorded. Second, a refined 3D finite element model of the caisson is established, and the wave force load is applied to the caisson according to the data recorded in the first step; the structural pounding response is then analyzed with the explicit solver of LS-DYNA. The full simulation of the pounding response of a caisson caused by typhoon "Sha Lijia" is carried out. The results show that different wave directions produce caisson collisions at different angles, which lead to different failure modes of the caisson; for an angle of 60° between the wave direction and the front/back wall, the simulated pounding failure mode is consistent with the observed damage.

  6. Health care priority setting: principles, practice and challenges

    PubMed Central

    Mitton, Craig; Donaldson, Cam

    2004-01-01

Background Health organizations the world over are required to set priorities and allocate resources within the constraint of limited funding. However, decision makers may not be well equipped to make explicit rationing decisions and as such often rely on historical or political resource allocation processes. One economic approach to priority setting which has gained momentum in practice over the last three decades is program budgeting and marginal analysis (PBMA). Methods This paper presents a detailed step by step guide for carrying out a priority setting process based on the PBMA framework. This guide is based on the authors' experience in using this approach primarily in the UK and Canada, and also draws on a growing literature of PBMA studies in various countries. Results At the core of the PBMA approach is an advisory panel charged with making recommendations for resource re-allocation. The process can be supported by a range of 'hard' and 'soft' evidence, and requires that decision making criteria are defined and weighted in an explicit manner. Evaluating the process of PBMA using an ethical framework, and noting important challenges to such activity including that of organizational behavior, are shown to be important aspects of developing a comprehensive approach to priority setting in health care. Conclusion Although not without challenges, international experience with PBMA over the last three decades would indicate that this approach has the potential to make substantial improvement on commonly relied upon historical and political decision making processes. In setting out a step by step guide for PBMA, as is done in this paper, implementation by decision makers should be facilitated. PMID:15104792

  7. Assessment of the GECKO-A modeling tool using chamber observations for C12 alkanes

    NASA Astrophysics Data System (ADS)

    Aumont, B.; La, S.; Ouzebidour, F.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J. M.; Hodzic, A.; Madronich, S.; Yee, L. D.; Loza, C. L.; Craven, J. S.; Zhang, X.; Seinfeld, J.

    2013-12-01

Secondary Organic Aerosol (SOA) production and ageing is the result of atmospheric oxidation processes leading to the progressive formation of organic species with higher oxidation state and lower volatility. Explicit chemical mechanisms reflect our understanding of these multigenerational oxidation steps. Major uncertainties remain concerning the processes leading to SOA formation, and the development, assessment, and improvement of such explicit schemes is therefore a key issue. The development of explicit mechanisms to describe the oxidation of long chain hydrocarbons is, however, a challenge. Indeed, explicit oxidation schemes involve a large number of reactions and secondary organic species, far exceeding the size of chemical schemes that can be written manually. The chemical mechanism generator GECKO-A (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is a computer program designed to overcome this difficulty. GECKO-A generates gas phase oxidation schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In this study, we examine the ability of the generated schemes to explain SOA formation observed in the Caltech Environmental Chambers from various C12 alkane isomers under high NOx and low NOx conditions. First results show that the model overestimates both the SOA yields and the O/C ratios. Various sensitivity tests are performed to explore processes that might be responsible for these disagreements.

  8. An energy- and charge-conserving, nonlinearly implicit, electromagnetic 1D-3V Vlasov-Darwin particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2014-10-01

A recent proof-of-principle study proposes a nonlinear electrostatic implicit particle-in-cell (PIC) algorithm in one dimension (Chen et al., 2011). The algorithm employs a kinetically enslaved Jacobian-free Newton-Krylov (JFNK) method, and conserves energy and charge to numerical round-off. In this study, we generalize the method to electromagnetic simulations in 1D using the Darwin approximation to Maxwell's equations, which avoids radiative noise issues by ordering out the light wave. An implicit, orbit-averaged, time-space-centered finite difference scheme is employed in both the 1D Darwin field equations (in potential form) and the 1D-3V particle orbit equations to produce a discrete system that remains exactly charge- and energy-conserving. Furthermore, enabled by the implicit Darwin equations, exact conservation of the canonical momentum per particle in any ignorable direction is enforced via a suitable scattering rule for the magnetic field. We have developed a simple preconditioner that targets electrostatic waves and skin currents, and allows us to employ time steps O(√(mi/me) c/vT,e) larger than the explicit CFL limit. Several 1D numerical experiments demonstrate the accuracy, performance, and conservation properties of the algorithm. In particular, the scheme is shown to be second-order accurate, and CPU speedups of more than three orders of magnitude vs. an explicit Vlasov-Maxwell solver are demonstrated in the "cold" plasma regime (where kλD ≪ 1).
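Exact energy conservation independent of the time step is the hallmark of time-centered implicit schemes. A toy version of that property (a Crank-Nicolson, i.e. implicit-midpoint, push for a single particle in a harmonic potential, solved in closed form; not the full field-coupled PIC algorithm) can be checked directly:

```python
# Crank-Nicolson (implicit midpoint) push for x' = v, v' = -w^2 x.
# The update is a Cayley transform and conserves the discrete energy
# exactly, independent of dt -- the same property the implicit PIC
# scheme enforces for the full particle-field system.
w, dt, nsteps = 1.0, 0.5, 1000     # dt well above typical explicit limits

x, v = 1.0, 0.0

def energy(x, v):
    return 0.5 * v**2 + 0.5 * (w * x)**2

E0 = energy(x, v)
a = 0.5 * dt
det = 1.0 + (a * w) ** 2
for _ in range(nsteps):
    # Closed-form solution of the 2x2 implicit midpoint system
    x_new = ((1 - (a * w) ** 2) * x + dt * v) / det
    v_new = (-dt * w**2 * x + (1 - (a * w) ** 2) * v) / det
    x, v = x_new, v_new

print(f"relative energy drift after {nsteps} steps: {abs(energy(x, v) - E0) / E0:.2e}")
```

The drift stays at round-off level however large dt is; an explicit leapfrog push would instead show bounded oscillation of the energy, and forward Euler would grow it monotonically.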

  9. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  10. Massively parallel first-principles simulation of electron dynamics in materials

    DOE PAGES

    Draeger, Erik W.; Andrade, Xavier; Gunnels, John A.; ...

    2017-08-01

Here we present a highly scalable, parallel implementation of first-principles electron dynamics coupled with molecular dynamics (MD). By using optimized kernels, network topology aware communication, and by fully distributing all terms in the time-dependent Kohn–Sham equation, we demonstrate unprecedented time to solution for disordered aluminum systems of 2000 atoms (22,000 electrons) and 5400 atoms (59,400 electrons), with wall clock time as low as 7.5 s per MD time step. Despite a significant amount of non-local communication required in every iteration, we achieved excellent strong scaling and sustained performance on the Sequoia Blue Gene/Q supercomputer at LLNL. We obtained up to 59% of the theoretical sustained peak performance on 16,384 nodes and performance of 8.75 Petaflop/s (43% of theoretical peak) on the full 98,304 node machine (1,572,864 cores). Lastly, scalable explicit electron dynamics allows for the study of phenomena beyond the reach of standard first-principles MD, in particular, materials subject to strong or rapid perturbations, such as pulsed electromagnetic radiation, particle irradiation, or strong electric currents.

  11. CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues

    NASA Technical Reports Server (NTRS)

    Palmieri, J. V.; Rathjen, K. A.

    1978-01-01

The method of solution is a hybrid analytical-numerical technique which utilizes eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even with the best of conductors and the finest of mesh sizes, which can provide a factor of five reduction in machine time compared to conventional explicit finite difference methods when structures with small time constants are analyzed over long time periods. This code will find utility in analyzing hypersonic missile and aircraft structures which fall naturally into this class. The code is a completely general one in that problems involving any geometry, boundary conditions and materials can be analyzed. This is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for the CAVE3 code. Input and output formats are presented and explained. Sample problems are included which illustrate the usage of the code as well as establish the validity and accuracy of the method.
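The eigenvalue/eigenvector idea behind the method can be sketched for a linear conduction network: diagonalize the conductance matrix once, then advance the temperatures exactly for any step size. This is a generic illustration of the technique, not the CAVE3 implementation, and the matrix below is an arbitrary stiff 1D chain.

```python
import numpy as np

# Linear conduction network dT/dt = -A T (A symmetric positive definite).
# Diagonalizing A once gives T(t) = V exp(-L t) V^T T(0), so any time
# step, however large, is evaluated exactly and is unconditionally stable.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D conduction chain
A *= 1e4                                               # good conductor: stiff system

lam, V = np.linalg.eigh(A)          # one-time eigendecomposition
T0 = np.random.default_rng(2).random(n)

def step(T, dt):
    """Advance the network exactly by dt using the eigendecomposition."""
    return V @ (np.exp(-lam * dt) * (V.T @ T))

dt_explicit = 2.0 / lam.max()        # forward-Euler stability limit
T = step(T0, 1000 * dt_explicit)     # one huge, still-exact step
print("max |T| after the large step:", np.abs(T).max())
```

A fixed-step explicit scheme would need a thousand tiny steps (or would diverge) where this evaluation takes one matrix-vector pass, which is the source of the machine-time savings the report describes.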

  12. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207
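The split-operator approach named above also exists as a standard classical algorithm, which makes its structure easy to show. The sketch below propagates a one-dimensional wave packet in a harmonic potential by alternating half-steps of the potential propagator with a full kinetic step applied in momentum space via FFT (classical NumPy, in units where ħ = m = 1; illustrative parameters, not the quantum-circuit version of the paper).

```python
import numpy as np

# Split-operator step: exp(-iH dt) ~ exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2),
# with the kinetic factor applied in momentum space via FFT.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dt, nsteps = 0.01, 500

V = 0.5 * x**2                        # harmonic potential
psi = np.exp(-(x - 2.0) ** 2)         # displaced Gaussian packet
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

expV = np.exp(-0.5j * V * dt)         # half-step potential propagator
expT = np.exp(-0.5j * k**2 * dt)      # full-step kinetic propagator (T = k^2/2)

for _ in range(nsteps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

norm = np.sum(np.abs(psi) ** 2) * dx
print(f"norm after {nsteps} steps: {norm:.12f}")
```

Every factor is unitary, so the norm is conserved to round-off; the quantum algorithm applies the same factorization as quantum gates, gaining its exponential advantage from the state living in the register rather than on a classical grid.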

  13. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    NASA Astrophysics Data System (ADS)

    Jones, Charles Rees

    2011-12-01

In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating, in real time, the complete three-dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom: transverse and dilational shear. Researchers have therefore traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step method with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam called the Cosserat rod, due first to the Cosserat brothers and later to Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.

  14. Massively parallel first-principles simulation of electron dynamics in materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, Erik W.; Andrade, Xavier; Gunnels, John A.

Here we present a highly scalable, parallel implementation of first-principles electron dynamics coupled with molecular dynamics (MD). By using optimized kernels, network topology aware communication, and by fully distributing all terms in the time-dependent Kohn–Sham equation, we demonstrate unprecedented time to solution for disordered aluminum systems of 2000 atoms (22,000 electrons) and 5400 atoms (59,400 electrons), with wall clock time as low as 7.5 s per MD time step. Despite a significant amount of non-local communication required in every iteration, we achieved excellent strong scaling and sustained performance on the Sequoia Blue Gene/Q supercomputer at LLNL. We obtained up to 59% of the theoretical sustained peak performance on 16,384 nodes and performance of 8.75 Petaflop/s (43% of theoretical peak) on the full 98,304 node machine (1,572,864 cores). Lastly, scalable explicit electron dynamics allows for the study of phenomena beyond the reach of standard first-principles MD, in particular, materials subject to strong or rapid perturbations, such as pulsed electromagnetic radiation, particle irradiation, or strong electric currents.

  15. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. 
For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
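The classical Dirichlet-Neumann (DN) baseline that CHAMP is compared against can be illustrated in a few lines. The sketch below is a deliberately minimal steady-state analogue (two 1D rods meeting at one interface, with illustrative conductivities), not the CHAMP scheme itself: domain 1 receives a Dirichlet interface temperature, domain 2 receives the resulting heat flux as a Neumann condition, and under-relaxation keeps the exchange stable.

```python
# Dirichlet-Neumann partitioned coupling for steady conduction in two
# 1D rods joined at an interface.  Outer-boundary temperatures are fixed;
# the iteration exchanges interface temperature and flux until the heat
# flux is continuous across the interface.
k1, k2 = 1.0, 2.0        # thermal conductivities (illustrative)
L1, L2 = 1.0, 1.0        # rod lengths
T0, T2 = 0.0, 1.0        # fixed outer-boundary temperatures
theta = 0.5              # relaxation; needs theta < 2 / (1 + k1*L2/(k2*L1))

Ti = 0.0                 # initial interface-temperature guess
for it in range(50):
    q = k1 * (Ti - T0) / L1          # flux out of domain 1 (Dirichlet solve)
    Ti_new = T2 - q * L2 / k2        # domain 2 solved with Neumann flux q
    Ti = (1 - theta) * Ti + theta * Ti_new

exact = (k1 / L1 * T0 + k2 / L2 * T2) / (k1 / L1 + k2 / L2)
print(f"interface T = {Ti:.6f}, exact = {exact:.6f}")
```

The convergence rate of this exchange degrades as the conductivity ratio grows, which is the regime where the paper reports the optimized Robin weights of CHAMP outperforming DN.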

  16. A stable and accurate partitioned algorithm for conjugate heat transfer

    DOE PAGES

    Meng, F.; Banks, J. W.; Henshaw, W. D.; ...

    2017-04-25

We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. 
For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  17. Two dimensional fully nonlinear numerical wave tank based on the BEM

    NASA Astrophysics Data System (ADS)

    Sun, Zhe; Pang, Yongjie; Li, Hongwei

    2012-12-01

The development of a two dimensional numerical wave tank (NWT) with a rocker or piston type wavemaker, based on the high order boundary element method (BEM) and the mixed Eulerian-Lagrangian (MEL) approach, is examined. The Cauchy principal value (CPV) integral is calculated by a special Gauss type quadrature and a change of variable. In addition, an explicit truncated Taylor expansion formula is employed in the time-stepping process. A modified double-node method is adopted to tackle the corner problem, and a damping zone technique is used to absorb the propagation of the free surface wave at the end of the tank. A variety of waves are generated by the NWT, for example monochromatic, solitary, and irregular waves. The results confirm that the NWT model is efficient and stable.

  18. Automatic contact in DYNA3D for vehicle crashworthiness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-15

This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  19. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

Calculating the solution of Poisson's equation for the space charge force remains the dominant computational cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
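The "explicitly zero-padded convolution" that the third optimization replaces is the standard baseline in Green's function Poisson solvers, and a 1D sketch makes it concrete. This is a generic illustration with random stand-in data, not the authors' optimized routine: both arrays are padded to at least n + m - 1 samples so that the circular FFT convolution reproduces the open-boundary (linear) result.

```python
import numpy as np

def fft_convolve(rho, green):
    """Linear convolution via explicitly zero-padded FFTs."""
    n = len(rho) + len(green) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two >= n
    F = np.fft.rfft(rho, nfft) * np.fft.rfft(green, nfft)
    return np.fft.irfft(F, nfft)[:n]

rng = np.random.default_rng(3)
rho = rng.random(64)            # charge-density samples (stand-in data)
green = rng.random(64)          # tabulated integrated Green's function (stand-in)

fast = fft_convolve(rho, green)
direct = np.convolve(rho, green)  # O(n^2) reference
print("max abs difference vs direct sum:", np.abs(fast - direct).max())
```

The padding doubles the transform size in every dimension, which is exactly the memory and runtime overhead that motivates replacing it with a more compact fast-convolution routine.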

  20. Periodic MHD flow with temperature dependent viscosity and thermal conductivity past an isothermal oscillating cylinder

    NASA Astrophysics Data System (ADS)

    Ahmed, Rubel; Rana, B. M. Jewel; Ahmmed, S. F.

    2017-06-01

Heat and mass transfer flow with temperature-dependent viscosity and thermal conductivity, chemical reaction, and a periodic magnetic field past an isothermal oscillating cylinder has been considered. The dimensionless partial differential equations governing the flow have been solved numerically by applying an explicit finite difference method implemented in Compaq Visual Fortran 6.6a. The results of this investigation are discussed for different values of the well-known flow parameters with different time steps and oscillation angles. The effects of the chemical reaction and periodic MHD parameters on the velocity, temperature, and concentration fields, as well as on the skin friction, Nusselt number, and Sherwood number, have been studied, and the results are presented graphically. The novelty of the present problem is the study of the streamlines while taking the periodic magnetic field into account.

  1. Constraining the loop quantum gravity parameter space from phenomenology

    NASA Astrophysics Data System (ADS)

    Brahma, Suddhasattwa; Ronco, Michele

    2018-03-01

    Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
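For orientation, MDRs of this kind are typically written as Planck-suppressed corrections to the standard relation; a common leading-order ansatz (illustrative only, with a model-dependent coefficient $\alpha$, not the specific result derived in this letter) is:

```latex
E^{2} = p^{2} + m^{2} + \alpha\,\ell_{\mathrm{Pl}}\,E\,p^{2}
        + \mathcal{O}\!\left(\ell_{\mathrm{Pl}}^{2}\right),
```

where $\ell_{\mathrm{Pl}}$ is the Planck length; different quantization choices feed into the value and sign of $\alpha$, which is what makes such relations phenomenologically testable.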

  2. Students' Task Interpretation and Conceptual Understanding in an Electronics Laboratory

    ERIC Educational Resources Information Center

    Rivera-Reyes, Presentacion; Lawanto, Oenardi; Pate, Michael L.

    2017-01-01

    Task interpretation is a critical first step for students in the process of self-regulated learning, and a key determinant when they set goals in their learning and select strategies in assigned work. This paper focuses on the explicit and implicit aspects of task interpretation based on Hadwin's model. Laboratory activities improve students'…

  3. A First Step Forward: Context Assessment

    ERIC Educational Resources Information Center

    Conner, Ross F.; Fitzpatrick, Jody L.; Rog, Debra J.

    2012-01-01

    In this chapter, we revisit and expand the context framework of Debra Rog, informed by three cases and by new aspects that we have identified. We then propose a way to move the framework into action, making context explicit. Based on the framework's components, we describe and illustrate a process we label context assessment (CA), which provides a…

  4. Review of Use of Animation as a Supplementary Learning Material of Physiology Content in Four Academic Years

    ERIC Educational Resources Information Center

    Hwang, Isabel; Tam, Michael; Lam, Shun Leung; Lam, Paul

    2012-01-01

    Dynamic concepts are difficult to explain in traditional media such as still slides. Animations seem to offer the advantage of delivering better representations of these concepts. Compared with static images and text, animations can present procedural information (e.g. biochemical reaction steps, physiological activities) more explicitly as they…

  5. Adaptive management of forest ecosystems: did some rubber hit the road?

    Treesearch

    B.T. Bormann; R.W. Haynes; J.R. Martin

    2007-01-01

    Although many scientists recommend adaptive management for large forest tracts, there is little evidence that its use has been effective at this scale. One exception is the 10-million-hectare Northwest Forest Plan, which explicitly included adaptive management in its design. Evidence from 10 years of implementing the plan suggests that formalizing adaptive steps and...

  6. Transforming English Language Learners' Work Readiness: Case Studies in Explicit, Work-Specific Vocabulary Instruction

    ERIC Educational Resources Information Center

    Madrigal-Hopes, Diana L.; Villavicencio, Edna; Foote, Martha M.; Green, Chris

    2014-01-01

    This qualitative study examined the impact of a six-step framework for work-specific vocabulary instruction in adult English language learners (ELLs). Guided by research in English as a second language (ESL) methodology and the transactional theory, the researchers sought to unveil how these processes supported the acquisition and application of…

  7. Immediate and Long-Term Effects of "Learning by Teaching" on Knowledge of Cognition

    ERIC Educational Resources Information Center

    Gutman, Mary

    2017-01-01

    Learning By Teaching (LBT) programs for pre-service teachers in two different environments (technological and face-to-face) were compared using 100 pre-service teachers as subjects. Both programs were based on the IMPROVE instructional method which provides explicit metacognitive steps for LBT with a dual perspective (2P): that of the teacher and…

  8. Generalized binomial τ-leap method for biochemical kinetics incorporating both delay and intrinsic noise

    NASA Astrophysics Data System (ADS)

    Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin

    2008-05-01

The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, 117(E) (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
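The core of a binomial τ-leap can be sketched for a single decay channel (plain Python; the delayed, multi-channel τ-DSSA of the paper is substantially more involved): the binomial draw caps the number of firings at the molecules actually present, which a plain Poisson leap does not.

```python
import random

def binomial_tau_leap_step(x, c, tau, rng):
    """One binomial tau-leap for the single decay channel X -> 0.

    A plain (Poisson) tau-leap can draw more firings than there are
    molecules; drawing k ~ Binomial(x, p) with p = min(1, c * tau)
    caps the firings at the x molecules actually present.
    """
    p = min(1.0, c * tau)
    k = sum(1 for _ in range(x) if rng.random() < p)  # Binomial(x, p)
    return x - k

rng = random.Random(1)
x = 1000
for _ in range(10):
    x = binomial_tau_leap_step(x, c=0.1, tau=0.5, rng=rng)
assert 0 <= x < 1000  # the population can never go negative by construction
```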

  9. Time-Dependent Parabolic Finite Difference Formulation for Harmonic Sound Propagation in a Two-Dimensional Duct with Flow

    NASA Technical Reports Server (NTRS)

    Kreider, Kevin L.; Baumeister, Kenneth J.

    1996-01-01

An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time-dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.

  10. THOR: an open-source exo-GCM

    NASA Astrophysics Data System (ADS)

    Grosheintz, Luc; Mendonça, João; Käppeli, Roger; Lukas Grimm, Simon; Mishra, Siddhartha; Heng, Kevin

    2015-12-01

In this talk, I will present THOR, the first fully conservative, GPU-accelerated exo-GCM (general circulation model) on a nearly uniform, global grid that treats shocks and is non-hydrostatic. THOR will be freely available to the community as a standard tool. Unlike most GCMs, THOR solves the full, non-hydrostatic Euler equations instead of the primitive equations. The equations are solved on a global three-dimensional icosahedral grid by a second-order Finite Volume Method (FVM). Icosahedral grids are nearly uniform refinements of an icosahedron. We've implemented three different versions of this grid. FVM conserves the prognostic variables (density, momentum and energy) exactly and doesn't require a diffusion term (artificial viscosity) in the Euler equations to stabilize our solver. Historically, FVM was designed to treat discontinuities correctly. Hence it excels at resolving shocks, including those present in hot exoplanetary atmospheres. Atmospheres are generally in near hydrostatic equilibrium. We therefore implement a well-balancing technique recently developed at ETH Zurich. This well-balancing ensures that our FVM maintains hydrostatic equilibrium to machine precision. Better yet, it is able to resolve pressure perturbations from this equilibrium as small as one part in 100,000. It is important to realize that these perturbations are significantly smaller than the truncation error of the same scheme without well-balancing. If during the course of the simulation (due to forcing) the atmosphere becomes non-hydrostatic, our solver continues to function correctly. THOR has just passed an important milestone: we've implemented the explicit part of the solver. The explicit solver is useful for studying instabilities or local problems on relatively short time scales. I'll show some nice properties of the explicit THOR. An explicit solver is not appropriate for climate studies because the time step is limited by the sound speed. Therefore, we are working on the first fully implicit GCM. By ESS3, I hope to present results for the advection equation. THOR is part of the Exoclimes Simulation Platform (ESP), a set of open-source community codes for simulating and understanding the atmospheres of exoplanets. The ESP also includes tools for radiative transfer and retrieval (HELIOS), an opacity calculator (HELIOS-K), and a chemical kinetics solver (VULCAN). We expect to publicly release an initial version of THOR in 2016 on www.exoclime.org.
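The sound-speed restriction on explicit solvers is the acoustic CFL condition; a back-of-the-envelope sketch (Python, with illustrative Earth-like numbers, not THOR's actual grid or parameters):

```python
import math

def acoustic_cfl_dt(dx, u_max, gamma, r_gas, temperature, cfl=0.5):
    """Largest stable explicit time step: the fastest signal moves at
    the wind speed plus the sound speed, so dt <= CFL * dx / (u + c_s)."""
    c_s = math.sqrt(gamma * r_gas * temperature)  # ideal-gas sound speed
    return cfl * dx / (u_max + c_s)

# Illustrative Earth-like numbers (not THOR's grid): 30 km cells,
# 100 m/s winds, air at 288 K -> dt of a few tens of seconds, far too
# short to march an explicit solver cheaply over climate time scales.
dt = acoustic_cfl_dt(dx=30e3, u_max=100.0, gamma=1.4,
                     r_gas=287.0, temperature=288.0)
assert 10.0 < dt < 60.0
```

An implicit solver removes this acoustic restriction, which is why it is the natural next step for climate-length integrations.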

  11. Learning to Predict Chemical Reactions

    PubMed Central

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles respectively are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. From machine learning, we pose identifying productive mechanistic steps as a statistical ranking (information retrieval) problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered.
Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal (http://cdb.ics.uci.edu) under the Toolkits section. PMID:21819139

  12. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular system calculations. Improvement of the computational methods for molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving the convergence of highly oscillatory integrals. These transformations form the basis of new methods for solving various problems that were otherwise unsolvable and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in turn should satisfy some limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.

  13. Skipping the real world: Classification of PolSAR images without explicit feature extraction

    NASA Astrophysics Data System (ADS)

    Hänsch, Ronny; Hellwich, Olaf

    2018-06-01

The typical processing chain for pixel-wise classification from PolSAR images starts with an optional preprocessing step (e.g. speckle reduction), continues with extracting features projecting the complex-valued data into the real domain (e.g. by polarimetric decompositions) which are then used as input for a machine-learning based classifier, and ends in an optional postprocessing (e.g. label smoothing). The extracted features are usually hand-crafted as well as preselected and represent (a somewhat arbitrary) projection from the complex to the real domain in order to fit the requirements of standard machine-learning approaches such as Support Vector Machines or Artificial Neural Networks. This paper proposes to adapt the internal node tests of Random Forests to work directly on the complex-valued PolSAR data, which makes any explicit feature extraction obsolete. This approach leads to a classification framework with a significantly decreased computation time and memory footprint since no image features have to be computed and stored beforehand. The experimental results on one fully-polarimetric and one dual-polarimetric dataset show that, despite the simpler approach, accuracy can be maintained (a decrease of less than 2% for the fully-polarimetric dataset) or even improved (an increase of roughly 9% for the dual-polarimetric dataset).
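A minimal caricature of a node test acting directly on complex-valued data (hypothetical scalar example; the paper's node tests operate on full polarimetric scattering matrices):

```python
def complex_node_test(z, ref_a, ref_b):
    """Internal node test acting directly on a complex value: route the
    sample left if it lies closer (in the complex plane) to reference
    value ref_a than to ref_b.  No projection to real-valued features
    is needed.  (Illustrative scalar version; the paper's node tests
    work on full polarimetric scattering matrices.)"""
    return abs(z - ref_a) <= abs(z - ref_b)

samples = [1 + 1j, 2 + 0.5j, -1 - 1j, -2 + 0j]
left = [z for z in samples if complex_node_test(z, 1 + 1j, -1 - 1j)]
assert left == [1 + 1j, 2 + 0.5j]
```

A forest then grows many such tests on random reference values, exactly as ordinary Random Forests grow threshold tests on real-valued features.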

  14. Fractional cable model for signal conduction in spiny neuronal dendrites

    NASA Astrophysics Data System (ADS)

    Vitali, Silvia; Mainardi, Francesco

    2017-06-01

The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing some fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution of the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable, which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
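In a dimensionless form, the model described here amounts to replacing the first-order time derivative of the classical cable equation by a Caputo derivative (our notation; the paper's scaling constants may differ):

```latex
\frac{\partial^{\alpha} V}{\partial t^{\alpha}}
  = \frac{\partial^{2} V}{\partial x^{2}} - V,
\qquad 0 < \alpha \le 1, \quad x > 0,\ t > 0,
```

with the signalling condition $V(0,t) = g(t)$ applied at the accessible end and $V(x,t) \to 0$ as $x \to \infty$; the choice $\alpha = 1$ recovers the classical cable equation.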

  15. A splitting integration scheme for the SPH simulation of concentrated particle suspensions

    NASA Astrophysics Data System (ADS)

    Bian, Xin; Ellero, Marco

    2014-01-01

    Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty limits severely the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
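The implicit half of such a splitting can be sketched for a single pair in one dimension (plain Python; illustrative, with a linear damping force standing in for the lubrication interaction, whereas the actual scheme sweeps iteratively over all neighboring SPH particle pairs until convergence):

```python
def implicit_pair_update(v1, v2, k_lub, dt):
    """Backward-Euler update for a stiff pairwise damping force
    F = -k_lub * (v1 - v2) acting on two unit-mass particles.

    Explicit integration of the relative velocity has amplification
    factor (1 - 2*k_lub*dt) and blows up once k_lub*dt > 1; the
    implicit update below is stable for any dt.
    """
    rel = (v1 - v2) / (1.0 + 2.0 * k_lub * dt)  # implicit solve
    mean = 0.5 * (v1 + v2)                      # pair momentum unchanged
    return mean + 0.5 * rel, mean - 0.5 * rel

# Stiff pair with k_lub * dt = 10: the relative velocity is damped
# smoothly instead of oscillating with growing amplitude.
v1, v2 = implicit_pair_update(1.0, -1.0, k_lub=1e3, dt=0.01)
assert abs(v1 + v2) < 1e-12   # momentum conserved
assert abs(v1 - v2) < 0.2     # |rel| = 2 / 21, strongly damped
```

The explicit part of the splitting (hydrodynamic and long-range forces) then advances with the ordinary SPH time step, which no longer needs to resolve the lubrication stiffness.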

  16. Non-hydrostatic general circulation model of the Venus atmosphere

    NASA Astrophysics Data System (ADS)

    Rodin, Alexander V.; Mingalev, Igor; Orlov, Konstantin; Ignatiev, Nikolay

We present the first non-hydrostatic global circulation model of the Venus atmosphere based on the complete set of gas dynamics equations. The model employs a spatially uniform triangular mesh that avoids artificial damping of the dynamical processes in the polar regions, with altitude as the vertical coordinate. Energy conversion from the solar flux into atmospheric motion is described via explicitly specified heating and cooling rates or, alternatively, with the help of a radiation block based on a comprehensive treatment of Venus atmosphere spectroscopy, including line-mixing effects in CO2 far-wing absorption. Momentum equations are integrated using a semi-Lagrangian explicit scheme that provides high accuracy of mass and energy conservation. Due to the high vertical grid resolution required by gas dynamics calculations, the model is integrated with a short time step of less than one second. The model reliably reproduces zonal superrotation, smoothly extending far below the cloud layer, tidal patterns at the cloud level and above, and a non-rotating, sun-synchronous global convective cell in the upper atmosphere. One of the most interesting features of the model is the development of polar vortices resembling those observed by the Venus Express VIRTIS instrument. Initial analysis of the simulation results confirms the hypothesis that it is the thermal tides that provide the main driver for the superrotation.

  17. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively-driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
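The correction-factor table-lookup idea can be sketched as follows (plain Python; the radii, factors, and function name are purely illustrative, not values from the IDL code):

```python
def corrected_field(b_1d, radius, table):
    """Apply a precomputed 2D/3D correction factor to a 1D field value.

    `table` maps liner radius -> correction factor (derived offline
    from field-solver runs at static liner radii, as done here with
    ANSYS Maxwell 3D); intermediate radii use linear interpolation.
    All numbers below are illustrative.
    """
    radii = sorted(table)
    if radius <= radii[0]:
        return b_1d * table[radii[0]]
    if radius >= radii[-1]:
        return b_1d * table[radii[-1]]
    for r0, r1 in zip(radii, radii[1:]):
        if r0 <= radius <= r1:
            w = (radius - r0) / (r1 - r0)
            return b_1d * ((1 - w) * table[r0] + w * table[r1])

table = {0.02: 0.85, 0.04: 0.92, 0.08: 0.97}   # radius [m] -> factor
assert abs(corrected_field(1.0, 0.03, table) - 0.885) < 1e-12
```

Inside the time-step loop, the lookup is re-evaluated at the current liner radius before the field value feeds the joule heating and dynamics updates.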

  18. Discrete-time Quantum Walks via Interchange Framework and Memory in Quantum Evolution

    NASA Astrophysics Data System (ADS)

    Dimcovic, Zlatko

One of the newer and rapidly developing approaches in quantum computing is based on "quantum walks," which are quantum processes on discrete space that evolve in either discrete or continuous time and are characterized by mixing of components at each step. The idea emerged in analogy with classical random walks and stochastic techniques, but these unitary processes are very different even as they have intriguing similarities. This thesis is concerned with the study of discrete-time quantum walks. The original motivation from classical Markov chains required that discrete-time quantum walks add an auxiliary Hilbert space, unrelated to the one in which the system evolves, so that components can be mixed in that space and the evolution steps taken accordingly (based on the state in that space). This additional "coin" space is very often an internal degree of freedom like spin. We have introduced a general framework for construction of discrete-time quantum walks in close analogy with classical random walks with memory that is rather different from the standard "coin" approach. In this method there is no need to bring in a different degree of freedom, while the full state of the system is still described in the direct product of spaces (of states). The state can be thought of as an arrow pointing from the previous to the current site in the evolution, representing the one-step memory. The next step is then controlled by a single local operator assigned to each site in the space, acting much like a scattering operator. This allows us to probe and solve some problems of interest that have not had successful approaches with "coined" walks. We construct and solve a walk on the binary tree, a structure of great interest that, until our result, lacked an explicit discrete-time quantum walk due to the difficulty of managing the coin spaces required by the standard approach.
Beyond algorithmic interests, the model based on memory allows one to explore the effects of history on the quantum evolution and the subtle emergence of classical features as "memory" is explicitly kept for additional steps. We construct and solve a walk with an additional correlation step, finding interesting new features. On the other hand, the fact that the evolution is driven entirely by a local operator, not involving additional spaces, enables us to choose the Fourier transform as an operator completely controlling the evolution. This in turn allows us to combine the quantum walk approach with Fourier transform based techniques, something decidedly not possible in classical computational physics. We are developing a formalism for building networks manageable by walks constructed within this framework, based on the surprising efficiency of our framework in discovering the internals of a simple network that we have solved so far. Finally, in line with our expectation that the field of quantum walks can take cues from the rich history of development of classical stochastic techniques, we establish starting points for work on non-Abelian quantum walks, with a particular quantum-walk analog of classical "card shuffling": the walk on the permutation group. In summary, this thesis presents a new framework for the construction of discrete-time quantum walks, employing and exploring the memoried nature of unitary evolution. It is applied to fully solving the following problems: a walk on the binary tree, and exploration of the quantum-to-classical transition with increased correlation length (history). It is then used for simple network discovery, and to lay the groundwork for analysis of complex networks, based on the combined power of efficient exploration of the Hilbert space (as a walk mixing components) and Fourier transformation (since we can choose this as the evolution operator).
We hope to establish this as a general technique, as its power would be unmatched by any approaches available in classical computing. We also looked at the promising and challenging prospect of walks on non-Abelian structures by setting up the problem of "quantum card shuffling," a quantum walk on the permutation group. Relation to other work is thoroughly discussed throughout, along with examination of the context of our work and overviews of our current and future work.

  19. Numerical calculation on a two-step subdiffusion behavior of lateral protein movement in plasma membranes

    NASA Astrophysics Data System (ADS)

    Sumi, Tomonari; Okumoto, Atsushi; Goto, Hitoshi; Sekino, Hideo

    2017-10-01

A two-step subdiffusion behavior of lateral movement of transmembrane proteins in plasma membranes has been observed in single-molecule experiments. A nested double-compartment model, where large compartments are divided into several smaller ones, has been proposed in order to explain this observation. These compartments are considered to be delimited by membrane-skeleton "fences" and membrane-protein "pickets" bound to the fences. We perform numerical simulations of a master equation using a simple two-dimensional lattice model to investigate the heterogeneous diffusion dynamics of transmembrane proteins within plasma membranes. We show that the experimentally observed two-step subdiffusion process can be described using fence and picket models combined with decreased local diffusivity of transmembrane proteins in the vicinity of the pickets. This allows us to explain the two-step subdiffusion behavior without explicitly introducing nested double compartments.
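A minimal Monte Carlo caricature of the picket effect (plain Python; the paper solves a master equation, and its compartment geometry is more detailed) shows how reduced hop rates near compartment boundaries suppress the mean squared displacement:

```python
import random

def walk(steps, slow_p, period=8, seed=0):
    """2D lattice random walk in which hop attempts made while standing
    on a compartment-boundary line (every `period`-th row or column,
    standing in for the picket-lined fences) succeed only with
    probability `slow_p`; elsewhere hops always succeed."""
    rng = random.Random(seed)
    x = y = 0
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        p = slow_p if (x % period == 0 or y % period == 0) else 1.0
        if rng.random() < p:
            x, y = x + dx, y + dy
    return x, y

def msd(slow_p, samples=200, steps=400):
    """Mean squared displacement over independent seeded walks."""
    return sum(x * x + y * y
               for x, y in (walk(steps, slow_p, seed=s)
                            for s in range(samples))) / samples

# Slowing hops near the boundary lines suppresses transport, the
# ingredient behind the apparent (two-step) subdiffusive regime.
assert msd(slow_p=0.2) < msd(slow_p=1.0)
```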

  20. Implicit and Explicit Memory for Affective Passages in Temporal Lobectomy Patients

    ERIC Educational Resources Information Center

    Burton, Leslie A.; Rabin, Laura; Vardy, Susan Bernstein; Frohlich, Jonathan; Porter, Gwinne Wyatt; Dimitri, Diana; Cofer, Lucas; Labar, Douglas

    2008-01-01

    Eighteen temporal lobectomy patients (9 left, LTL; 9 right, RTL) were administered four verbal tasks, an Affective Implicit Task, a Neutral Implicit Task, an Affective Explicit Task, and a Neutral Explicit Task. For the Affective and Neutral Implicit Tasks, participants were timed while reading aloud passages with affective or neutral content,…

  1. Implicit timing activates the left inferior parietal cortex.

    PubMed

    Wiener, Martin; Turkeltaub, Peter E; Coslett, H Branch

    2010-11-01

    Coull and Nobre (2008) suggested that tasks that employ temporal cues might be divided on the basis of whether these cues are explicitly or implicitly processed. Furthermore, they suggested that implicit timing preferentially engages the left cerebral hemisphere. We tested this hypothesis by conducting a quantitative meta-analysis of eleven neuroimaging studies of implicit timing using the activation-likelihood estimation (ALE) algorithm (Turkeltaub, Eden, Jones, & Zeffiro, 2002). Our analysis revealed a single but robust cluster of activation-likelihood in the left inferior parietal cortex (supramarginal gyrus). This result is in accord with the hypothesis that the left hemisphere subserves implicit timing mechanisms. Furthermore, in conjunction with a previously reported meta-analysis of explicit timing tasks, our data support the claim that implicit and explicit timing are supported by at least partially distinct neural structures. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Towards automated assistance for operating home medical devices.

    PubMed

    Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D

    2010-01-01

To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach based on explicitly encoding motion information: the algorithm detects interest points and encodes not only their local appearance but also explicitly models local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from 4 available cameras to obtain an average class recognition rate of 69%.

  3. One-jet inclusive cross section at order a(s)-cubed - Gluons only

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen D.; Kunszt, Zoltan; Soper, Davison E.

    1989-01-01

A complete calculation of the hadron jet cross section at one order beyond the Born approximation is performed for the simplified case in which there are only gluons. The general structure of the differences from the lowest-order cross section is described. This step allows two important improvements in the understanding of the theoretical hadron jet cross section: first, the cross section at this order displays explicit dependence on the jet cone size, so that explicit account can be taken of the differences in jet definitions employed by different experiments; second, the magnitude of the uncertainty of the theoretical cross section due to the arbitrary choice of the factorization scale has been reduced by a factor of two to three.

  4. The time course of explicit and implicit categorization.

    PubMed

    Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A

    2015-10-01

    Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.

  5. Lie symmetry analysis, explicit solutions and conservation laws for the space-time fractional nonlinear evolution equations

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa; Baleanu, Dumitru

    2018-04-01

    This paper studies the symmetry analysis, explicit solutions, convergence analysis, and conservation laws (Cls) for two different space-time fractional nonlinear evolution equations with the Riemann-Liouville (RL) derivative. The governing equations are reduced to nonlinear ordinary differential equations (ODEs) of fractional order using their Lie point symmetries. In the reduced equations the derivative is in the Erdelyi-Kober (EK) sense; the power series technique is applied to derive explicit solutions for the reduced fractional ODEs. The convergence of the obtained power series solutions is also presented. Moreover, the new conservation theorem and the generalization of the Noether operators are developed to construct the nonlocal Cls for the equations. Some interesting figures for the obtained explicit solutions are presented.
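    The Riemann-Liouville derivative referred to above has the standard definition: for order α with n − 1 < α ≤ n,

```latex
D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
\frac{d^{n}}{dt^{n}} \int_0^t (t-s)^{\,n-\alpha-1}\, f(s)\, ds,
\qquad n-1 < \alpha \le n .
```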

  6. Improving Undergraduates' Critical Thinking Skills through Peer-learning Workshops

    NASA Astrophysics Data System (ADS)

    Cole, S. B.

    2013-12-01

    Critical thinking skills are among the primary learning outcomes of undergraduate education, but they are rarely explicitly taught. Here I present a two-fold study aimed at analyzing undergraduate students' critical thinking and information literacy skills, and explicitly teaching these skills, in an introductory Planetary Science course. The purpose of the research was to examine the students' information-filtering skills and to develop a short series of peer-learning workshops that would enhance these skills in both the students' coursework and their everyday lives. The 4 workshops are designed to be easily adaptable to any college course, with little impact on the instructor's workload. They make use of material related to the course's content, enabling the instructor to complement a pre-existing syllabus while explicitly teaching students skills essential to their academic and non-academic lives. In order to gain an understanding of undergraduates' existing information-filtering skills, I examined the material that they consider to be appropriate sources for a college paper. I analyzed the Essay 1 bibliographies of a writing-based introductory Planetary Science course for non-majors. The 22 essays cited 135 (non-unique) references, only half of which were deemed suitable by their instructors. I divided the sources into several categories and classified them as recommended, recommended with caution, and unsuitable for this course. The unsuitable sources ranged from peer-reviewed journal articles, which these novice students were not equipped to properly interpret, to websites that cannot be relied upon for scientific information (e.g., factoidz.com, answersingenesis.org). 
The workshops aim to improve the students' information-filtering skills by sequentially teaching them to evaluate search engine results, identify claims made on websites and in news articles, evaluate the evidence presented, and identify specific correlation/causation fallacies in news articles and advertisements. Students work in groups of 3-4, discussing worksheet questions that lead them step-by-step through 1) verbalizing their preconceptions of the workshop theme, 2) dissecting instructional materials to discover the cognitive processes they already use, 3) applying skills step-by-step in real-world situations (search engine results, news articles, ads, etc.), and 4) using metacognitive strategies of questioning and reflecting. Student participants in the pilot study often verbalized metacognition, and retained concepts as evidenced by a post-test conducted 2 months after the first workshop. They additionally reported consciously using skills learned in the workshops over a year later.

  7. Assessment of the Simulated Molecular Composition with the GECKO-A Modeling Tool Using Chamber Observations for α-Pinene.

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Camredon, M.; Isaacman-VanWertz, G. A.; Karam, C.; Valorso, R.; Madronich, S.; Kroll, J. H.

    2016-12-01

    Gas phase oxidation of VOC is a gradual process leading to the formation of multifunctional organic compounds, i.e., typically species with higher oxidation state, high water solubility and low volatility. These species contribute to the formation of secondary organic aerosols (SOA) via multiphase processes involving a myriad of organic species that evolve through thousands of reactions and gas/particle mass exchanges. Explicit chemical mechanisms reflect the understanding of these multigenerational oxidation steps. These mechanisms rely directly on elementary reactions to describe the chemical evolution and track the identity of organic carbon through various phases down to ultimate oxidation products. The development, assessment and improvement of such explicit schemes is a key issue, as major uncertainties remain on the chemical pathways involved during atmospheric oxidation of organic matter. An array of mass spectrometric techniques (CIMS, PTRMS, AMS) was recently used to track the composition of organic species during α-pinene oxidation in the MIT environmental chamber, providing an experimental database to evaluate and improve explicit mechanisms. In this study, the GECKO-A tool (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to generate fully explicit oxidation schemes for α-pinene multiphase oxidation simulating the MIT experiment. The ability of the GECKO-A chemical scheme to explain the organic molecular composition in the gas and the condensed phases is explored. First results of this model/observation comparison at the molecular level will be presented.

  8. Spatially-explicit estimates of greenhouse-gas payback times for perennial cellulosic biomass production on open lands in the Lake States

    NASA Astrophysics Data System (ADS)

    Sahajpal, R.

    2015-12-01

    The development of renewable energy sources is an integral step towards mitigating the carbon-dioxide-induced component of climate change. One important renewable source is plant biomass, comprising both food crops such as corn (Zea mays) and cellulosic biomass from short-rotation woody crops (SRWC) such as hybrid poplar (Populus spp.) and willow (Salix spp.). Due to their market acceptability and excellent energy balance, cellulosic feedstocks represent an abundant and, if managed properly, carbon-neutral and environmentally beneficial resource. We evaluate how site variability impacts the greenhouse-gas (GHG) benefits of SRWC plantations on lands potentially suited for bioenergy feedstock production in the Lake States (Minnesota, Wisconsin, Michigan). We combine high-resolution, spatially-explicit estimates of biomass, soil organic carbon and nitrous oxide emissions for SRWC plantations from the Environmental Policy Integrated Climate (EPIC) model along with life cycle analysis results from the GREET model to determine the greenhouse-gas payback time (GPBT), or the time needed before the GHG savings due to displacement of fossil fuels exceed the initial losses from plantation establishment. We calibrate our models using unique yield and N2O emission data from sites across the Lake States that have been converted from pasture and hayfields to SRWC plantations. Our results show a reduction of 800,000 ha in non-agricultural open land availability for biomass production, a loss of nearly 37% (see attached figure). Overall, GPBTs range between 1 and 38 years, with the longest GPBTs occurring in the northern Lake States. Initial soil nitrate levels and site drainage potential explain more than half of the variation in GPBTs. Our results indicate a rapidly closing window of opportunity to establish a sustainable cellulosic feedstock economy in the Lake States.
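    The payback-time concept reduces to a simple break-even calculation; a minimal sketch (invented function name and units; the paper's actual computation uses spatially varying EPIC/GREET outputs, not a constant annual saving):

```python
def payback_time_years(establishment_loss, annual_savings, max_years=50):
    """Smallest whole year at which cumulative GHG savings from fossil-fuel
    displacement exceed the initial emissions from plantation establishment.
    Units are arbitrary but must match (e.g. Mg CO2e per hectare)."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        cumulative += annual_savings
        if cumulative >= establishment_loss:
            return year
    return None  # not paid back within max_years

print(payback_time_years(establishment_loss=20.0, annual_savings=2.5))  # → 8
```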

  9. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
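    For orientation, a 4th-order explicit Runge-Kutta step looks as follows. This is the textbook (non-low-storage) form, shown only to illustrate the kind of time integrator named above, not the authors' scheme:

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y).
    Low-storage variants reuse registers but follow the same staging idea."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h,   y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# integrate y' = y from 0 to 1; the result should approach e ≈ 2.71828
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(round(y, 5))  # → 2.71828
```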

  10. Implicit and Explicit Knowledge Both Improve Dual Task Performance in a Continuous Pursuit Tracking Task.

    PubMed

    Ewolds, Harald E; Bröker, Laura; de Oliveira, Rita F; Raab, Markus; Künzell, Stefan

    2017-01-01

    The goal of this study was to investigate the effect of predictability on dual-task performance in a continuous tracking task. Participants practiced either informed (explicit group) or uninformed (implicit group) about a repeated segment in the curves they had to track. In Experiment 1 participants practiced the tracking task only; dual-task performance was assessed afterward by combining the tracking task with an auditory reaction time task. Results showed both groups learned equally well, and tracking performance on a predictable segment in the dual-task condition was better than on random segments. However, reaction times did not benefit from a predictable tracking segment. To investigate the effect of learning under dual-task conditions, participants in Experiment 2 practiced the tracking task while simultaneously performing the auditory reaction time task. No learning of the repeated segment could be demonstrated for either group during the training blocks, in contrast to the test-block and retention test, where participants performed better on the repeated segment in both dual-task and single-task conditions. Only the explicit group improved from test-block to retention test. As in Experiment 1, reaction times while tracking a predictable segment were no better than reaction times while tracking a random segment. We concluded that predictability has a positive effect only on the predictable task itself, possibly because of a task-shielding mechanism. For dual-task training there seems to be an initial negative effect of explicit instructions, possibly because of fatigue, but the advantage of explicit instructions was demonstrated in a retention test. This might be due to the explicit memory system informing or aiding the implicit memory system.

  11. Implicit and Explicit Knowledge Both Improve Dual Task Performance in a Continuous Pursuit Tracking Task

    PubMed Central

    Ewolds, Harald E.; Bröker, Laura; de Oliveira, Rita F.; Raab, Markus; Künzell, Stefan

    2017-01-01

    The goal of this study was to investigate the effect of predictability on dual-task performance in a continuous tracking task. Participants practiced either informed (explicit group) or uninformed (implicit group) about a repeated segment in the curves they had to track. In Experiment 1 participants practiced the tracking task only; dual-task performance was assessed afterward by combining the tracking task with an auditory reaction time task. Results showed both groups learned equally well, and tracking performance on a predictable segment in the dual-task condition was better than on random segments. However, reaction times did not benefit from a predictable tracking segment. To investigate the effect of learning under dual-task conditions, participants in Experiment 2 practiced the tracking task while simultaneously performing the auditory reaction time task. No learning of the repeated segment could be demonstrated for either group during the training blocks, in contrast to the test-block and retention test, where participants performed better on the repeated segment in both dual-task and single-task conditions. Only the explicit group improved from test-block to retention test. As in Experiment 1, reaction times while tracking a predictable segment were no better than reaction times while tracking a random segment. We concluded that predictability has a positive effect only on the predictable task itself, possibly because of a task-shielding mechanism. For dual-task training there seems to be an initial negative effect of explicit instructions, possibly because of fatigue, but the advantage of explicit instructions was demonstrated in a retention test. This might be due to the explicit memory system informing or aiding the implicit memory system. PMID:29312083

  12. Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

    2014-12-01

    Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can however be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

  13. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step-size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the step size to change when the ratio between two consecutive steps is close to unity.
This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
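    The local-error-controlled stepping described above can be sketched with a generic accept/reject controller built on an embedded Euler(1)/Heun(2) pair; this is an illustrative toy, not the Gauss2 code, and the gain exponent 1/2 matches the first-order error estimator:

```python
def adaptive_euler_heun(f, t, y, t_end, tol):
    """Integrate dy/dt = f(t, y) with an embedded Euler/Heun pair: the
    difference between the two estimates serves as the local error; a step
    is accepted when the error is <= tol, and the step size is rescaled
    proportionally to (tol/err)**(1/2)."""
    h = (t_end - t) / 100.0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low  = y + h * k1               # explicit Euler, order 1
        y_high = y + h/2 * (k1 + k2)      # Heun, order 2
        err = abs(y_high - y_low)         # local error estimate
        if err <= tol:
            t, y = t + h, y_high          # accept; advance with Heun
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# y' = -y on [0, 5]: result should be close to exp(-5) ≈ 0.0067379
y = adaptive_euler_heun(lambda t, y: -y, 0.0, 1.0, 5.0, 1e-6)
print(round(y, 4))  # → 0.0067
```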

  14. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered to be a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations using non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution method used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink supported by the Pascal architecture. Performance of the present method is evaluated on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
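    Each ADI sweep of a semi-implicit step reduces to tridiagonal linear solves along one grid direction. A minimal CPU sketch of the Thomas algorithm that performs such a solve (not the authors' GPU implementation; matrix values are an invented 1D implicit-diffusion example):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d (a[0] and c[-1] are unused/zero)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# e.g. an implicit 1D diffusion step (I - r*Laplacian) x = d with r = 1
a, b, c = [0, -1, -1, -1], [3, 3, 3, 3], [-1, -1, -1, 0]
print([round(v, 6) for v in thomas_solve(a, b, c, [1, 1, 1, 1])])
# → [0.6, 0.8, 0.8, 0.6]
```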

  15. What Did They Learn in School Today? A Method for Exploring Aspects of Learning in Physical Education

    ERIC Educational Resources Information Center

    Quennerstedt, Mikael; Annerstedt, Claes; Barker, Dean; Karlefors, Inger; Larsson, Håkan; Redelius, Karin; Öhman, Marie

    2014-01-01

    This paper outlines a method for exploring learning in educational practice. The suggested method combines an explicit learning theory with robust methodological steps in order to explore aspects of learning in school physical education. The design of the study is based on sociocultural learning theory, and the approach adds to previous research…

  16. The key to success in elite athletes? Explicit and implicit motor learning in youth elite and non-elite soccer players.

    PubMed

    Verburgh, L; Scherder, E J A; van Lange, P A M; Oosterlaan, J

    2016-09-01

    In sports, fast and accurate execution of movements is required. It has been shown that implicitly learned movements might be less vulnerable than explicitly learned movements to the stressful and fast-changing circumstances that exist at the elite sports level. The present study provides insight into explicit and implicit motor learning in youth soccer players with different expertise levels. Twenty-seven youth elite soccer players and 25 non-elite soccer players (aged 10-12) performed a serial reaction time task (SRTT). In the SRTT, one of the sequences must be learned explicitly, the other was learned implicitly. No main effect of group was found for implicit and explicit learning on mean reaction time (MRT) and accuracy. However, for MRT, an interaction was found between learning condition, learning phase and group. Analyses showed no group effects for the explicit learning condition, but youth elite soccer players showed better learning in the implicit learning condition. In particular, during implicit motor learning youth elite soccer players showed faster MRTs in the early learning phase and reached asymptote performance in terms of MRT earlier. Present findings may be important for sports because children with superior implicit learning abilities in early learning phases may be able to learn more (durable) motor skills in a shorter time period as compared to other children.

  17. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
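    The hand-off between scale levels can be caricatured with a Monte-Carlo draw from the fitted Weibull distribution; the function, parameter values and the mean-based homogenization rule below are invented for illustration and are much cruder than the paper's simulations:

```python
import random

def effective_modulus(weibull_scale, weibull_shape, n_samples=10000, seed=1):
    """Toy upscaling step: sample Weibull-distributed moduli for many
    sub-volumes at one scale level and return their mean as the effective
    modulus fed to the next-coarser level."""
    rng = random.Random(seed)
    draws = [rng.weibullvariate(weibull_scale, weibull_shape)
             for _ in range(n_samples)]
    return sum(draws) / n_samples

# a large shape parameter means little scatter: the effective modulus
# stays close to (slightly below) the scale parameter of 200
print(round(effective_modulus(200.0, 12.0), 1))
```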

  18. EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Junjie; Wu, Xin; Huang, Guoqing

    2017-01-01

    We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next integration step. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in restraining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, but the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for the orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.
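    Yoshida's triple product, the building block named above, composes three second-order leapfrog steps into a fourth-order integrator. A minimal sketch for a separable Hamiltonian (not the authors' extended-phase-space construction, which targets inseparable Hamiltonians):

```python
def leapfrog(q, p, h, grad_V):
    """One 2nd-order kick-drift-kick step for H = p^2/2 + V(q)."""
    p -= 0.5 * h * grad_V(q)
    q += h * p
    p -= 0.5 * h * grad_V(q)
    return q, p

# Yoshida's triple product S(w1 h) S(w0 h) S(w1 h) is 4th-order accurate
CBRT2 = 2.0 ** (1.0 / 3.0)
W1 = 1.0 / (2.0 - CBRT2)
W0 = -CBRT2 / (2.0 - CBRT2)          # note 2*W1 + W0 = 1

def yoshida4_step(q, p, h, grad_V):
    for w in (W1, W0, W1):
        q, p = leapfrog(q, p, w * h, grad_V)
    return q, p

# harmonic oscillator V = q^2/2: energy error stays bounded and tiny
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = yoshida4_step(q, p, 0.01, lambda q: q)
print(abs(0.5 * (q*q + p*p) - 0.5) < 1e-4)  # → True
```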

  19. FT-IR study and solvent-implicit and explicit effect on stepwise tautomerism of Guanylurea: M06-2X as a case of study.

    PubMed

    Karimzadeh, Morteza; Manouchehri, Neda; Saberi, Dariush; Niknam, Khodabakhsh

    2018-06-15

    All 66 conformers of guanylurea were optimized and frequency calculations were performed at the M06-2X/6-311++G(d,p) level of theory. These conformers were categorized into five tautomers, and the most stable conformer of each tautomer was found. Geometrical parameters indicated that these tautomers have almost planar structures. Complete stepwise tautomerism was studied through both intramolecular proton transfer routes and internal rotations. Results indicated that the proton transfer routes involving four-membered heterocyclic structures were the rate-determining steps. Also, intramolecular proton movement through six-membered transition state structures had very low energy barriers comparable to the transition states of the internal rotation routes. Differentiation of the studied tautomers could easily be done through their FT-IR spectra in the range of 3200 to 3900 cm-1 by comparing absorption bands and peak intensities. Solvent-implicit effects on the stability of the tautomers were also studied through re-optimization and frequency calculation in four solvents. Water, DMSO, acetone and toluene all stabilized the considered tautomers, in the order water > DMSO > acetone > toluene. Finally, solvent-explicit, base-explicit and acid-explicit effects were studied by placing an acid, base or solvent molecule beside the studied tautomer and optimizing the complex. Frequency calculations for proton movement with an explicit molecule showed that formic acid had a very strong effect on proton transfer from tautomer A1 to tautomer D8, lowering the energy barrier from 42.57 to 0.8 kcal/mol. In addition, the ammonia-explicit effect was found to lower the barrier from 42.57 to 22.46 kcal/mol, although this effect is weaker than the water- and methanol-explicit effects. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

    We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with the fixed non-growing amount of the CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in oddly dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
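    A 1D analogue of the standard explicit wave-equation scheme mentioned above (the paper works in 3D; this sketch, with invented grid parameters, only illustrates the explicit update stencil and its CFL restriction):

```python
import math

# u_tt = c^2 u_xx, second order in space and time; the explicit scheme
# is stable for CFL number c*dt/dx <= 1
nx, c, dx = 101, 1.0, 0.01
dt = 0.5 * dx / c                          # CFL number 0.5
u_prev = [math.exp(-((i * dx - 0.5) / 0.05) ** 2) for i in range(nx)]
u = u_prev[:]                              # zero initial velocity
for _ in range(60):                        # advance to t = 0.3
    u_next = [0.0] * nx                    # homogeneous Dirichlet walls
    for i in range(1, nx - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (c * dt / dx) ** 2 * (u[i+1] - 2 * u[i] + u[i-1]))
    u_prev, u = u, u_next
# the initial pulse splits into two half-amplitude pulses, so the
# peak value should be close to 0.5
print(max(u))
```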

  1. The Time Course of Explicit and Implicit Categorization

    PubMed Central

    Zakrzewski, Alexandria C.; Herberger, Eric; Boomer, Joseph; Roeder, Jessica; Ashby, F. Gregory; Church, Barbara A.

    2015-01-01

    Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization. PMID:26025556

  2. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing

    NASA Astrophysics Data System (ADS)

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko

    2007-10-01

    The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (elliptical hill, mountain ridge, system of successive ridges) in a rectangular domain, with emphasis on numerical accuracy and the capability to represent non-hydrostatic effects. Comparison demonstrates good (in strong primary wave generation) to satisfactory (in weak secondary wave reproduction in some cases) consistency of the numerical modelling results with known stationary linear test solutions. Numerical stability of the developed model is investigated with respect to the choice of reference state, modelling the dynamics of a stationary front. The horizontally area-mean reference temperature proves to be the optimal choice for stability. The numerical scheme with an explicit residual in the vertical forcing term becomes unstable for cross-frontal temperature differences exceeding 30 K. Stability is restored if the vertical forcing is treated implicitly, which enables the use of time steps comparable with those of the hydrostatic SISL scheme.

  3. An adaptive, implicit, conservative, 1D-2V multi-species Vlasov-Fokker-Planck multi-scale solver in planar geometry

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Chacón, L.; Simakov, A. N.

    2018-07-01

    We consider a 1D-2V Vlasov-Fokker-Planck multi-species ionic description coupled to fluid electrons. We address temporal stiffness with implicit time stepping, suitably preconditioned. To address temperature disparity in time and space, we extend the conservative adaptive velocity-space discretization scheme proposed in [Taitano et al., J. Comput. Phys., 318, 391-420, (2016)] to a spatially inhomogeneous system. In this approach, we normalize the velocity-space coordinate to a temporally and spatially varying local characteristic speed per species. We explicitly consider the resulting inertial terms in the Vlasov equation, and derive a discrete formulation that conserves mass, momentum, and energy up to a prescribed nonlinear tolerance upon convergence. Our conservation strategy employs nonlinear constraints to enforce these properties discretely for both the Vlasov operator and the Fokker-Planck collision operator. Numerical examples of varying degrees of complexity, including shock-wave propagation, demonstrate the favorable efficiency and accuracy properties of the scheme.

  4. A finite difference method for a coupled model of wave propagation in poroelastic materials.

    PubMed

    Zhang, Yang; Song, Limin; Deffenbaugh, Max; Toksöz, M Nafi

    2010-05-01

    A computational method for time-domain multi-physics simulation of wave propagation in a poroelastic medium is presented. The medium is composed of an elastic matrix saturated with a Newtonian fluid, and the method operates on a digital representation of the medium where a distinct material phase and properties are specified at each volume cell. The dynamic response to an acoustic excitation is modeled mathematically with a coupled system of equations: elastic wave equation in the solid matrix and linearized Navier-Stokes equation in the fluid. Implementation of the solution is simplified by introducing a common numerical form for both solid and fluid cells and using a rotated-staggered-grid which allows stable solutions without explicitly handling the fluid-solid boundary conditions. A stability analysis is presented which can be used to select gridding and time step size as a function of material properties. The numerical results are shown to agree with the analytical solution for an idealized porous medium of periodically alternating solid and fluid layers.
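A stability analysis of the kind mentioned above yields a bound on the time step given the grid spacing and material wave speeds. A generic von Neumann/CFL-type bound (illustrative only, not the paper's specific criterion for the rotated staggered grid) can be sketched as:

```python
import numpy as np

def max_stable_dt(v_max, dx, dims=3, courant=1.0):
    """Generic CFL-type bound for an explicit FD scheme on a uniform grid:
    dt <= courant * dx / (v_max * sqrt(dims)), where v_max is the fastest
    wave speed present in the medium."""
    return courant * dx / (v_max * np.sqrt(dims))

# E.g. a 1 mm grid and a 3000 m/s fastest (solid P-wave) speed in 3D:
dt = max_stable_dt(v_max=3000.0, dx=1.0e-3, dims=3)
```

The fastest wave speed over all material phases controls the bound, which is why stiff solid cells, not the fluid, typically set the time step in a poroelastic simulation.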

  5. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juno, J.; Hakim, A.; TenBarge, J.

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.
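The third-order strong-stability-preserving Runge-Kutta method named above has a standard Shu-Osher form: three forward-Euler stages combined convexly. A minimal sketch (the `rhs` argument here stands in for the DG spatial operator, which is not reproduced):

```python
def ssp_rk3_step(u, dt, rhs):
    """One Shu-Osher SSP-RK3 step for du/dt = rhs(u): a convex combination
    of forward-Euler stages, so stability properties of a single Euler
    stage (under a CFL restriction) carry over to the full step."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Sanity check on du/dt = -u from u(0) = 1 to t = 1 (exact answer: e^-1).
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(u, dt, lambda v: -v)
```

The convex-combination structure is what makes the method "strong-stability preserving": any monotonicity or TVD property of forward Euler is inherited.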

  6. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE PAGES

    Juno, J.; Hakim, A.; TenBarge, J.; ...

    2017-10-10

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.

  7. User's guide for the computer code COLTS for calculating the coupled laminar and turbulent flow over a Jovian entry probe

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graeves, R. A.

    1980-01-01

    A user's guide is provided for the computer code COLTS (Coupled Laminar and Turbulent Solutions), which calculates the laminar and turbulent hypersonic flows, with radiation and coupled ablation injection, past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator into the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for the various species is considered to obtain the radiative flux. The code is written for a CDC-CYBER-203 computer and is also capable of providing solutions for ablated probe shapes.

  8. Virtual-pulse time integral methodology: A new explicit approach for computational dynamics - Theoretical developments for general nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Different from existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives on method development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) using nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.

  9. The Impact of ARM on Climate Modeling. Chapter 26

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.

    2016-01-01

    Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades.
Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.
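The prognostic-variable bookkeeping described above, in which only the prognostic fields survive between steps and everything else is rebuilt from them, can be sketched with a toy driver (a hypothetical illustration, not any real AGCM's code; forward Euler stands in for the model's actual time scheme):

```python
def integrate(prognostic, tendencies, diagnose, dt, n_steps):
    """Toy time-stepping driver: only `prognostic` persists between steps;
    diagnostic quantities are recreated from it at every step, then each
    prognostic variable is advanced by its tendency (forward Euler)."""
    for _ in range(n_steps):
        diag = diagnose(prognostic)                # rebuilt each step
        for name, tend in tendencies.items():
            prognostic[name] += dt * tend(prognostic, diag)
    return prognostic

# Hypothetical one-variable "model": exponential relaxation dT/dt = -k T.
state = integrate(
    prognostic={"T": 300.0},
    tendencies={"T": lambda p, d: -0.1 * p["T"]},
    diagnose=lambda p: {},
    dt=0.1,
    n_steps=10,
)
```

The point of the structure is exactly the one the chapter makes: if a quantity is not in `prognostic`, the model has no memory of it from one step to the next.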

  10. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  11. A finite difference solution for the propagation of sound in near sonic flows

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Lester, H. C.

    1983-01-01

    An explicit time/space finite difference procedure is used to model the propagation of sound in a quasi one-dimensional duct containing high Mach number subsonic flow. Nonlinear acoustic equations are derived by perturbing the time-dependent Euler equations about a steady, compressible mean flow. The governing difference relations are based on a fourth-order, two-step (predictor-corrector) MacCormack scheme. The solution algorithm functions by switching on a time harmonic source and allowing the difference equations to iterate to a steady state. The principal effect of the nonlinearities was to shift acoustical energy to higher harmonics. With increased source strengths, wave steepening was observed. This phenomenon suggests that the acoustical response may approach a shock behavior at a higher sound pressure level as the throat Mach number approaches unity. On a peak level basis, good agreement between the nonlinear finite difference and linear finite element solutions was observed, even though a peak sound pressure level of about 150 dB occurred in the throat region. Nonlinear steady state waveform solutions are shown to be in excellent agreement with a nonlinear asymptotic theory.
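The two-step (predictor-corrector) MacCormack idea mentioned above can be illustrated with its standard second-order form for linear advection (the paper's scheme is a fourth-order variant applied to the nonlinear quasi-1D acoustic equations, so this is only a structural sketch):

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    """Second-order MacCormack step for u_t + a u_x = 0 (periodic):
    forward-difference predictor followed by a backward-difference
    corrector, averaged."""
    r = a * dt / dx
    u_pred = u - r * (np.roll(u, -1) - u)                          # predictor
    return 0.5 * (u + u_pred - r * (u_pred - np.roll(u_pred, 1)))  # corrector

# At Courant number 1 the scheme shifts the profile exactly one cell.
N = 64
x = np.arange(N) * 2.0 * np.pi / N
u0 = np.sin(x)
u1 = maccormack_step(u0, a=1.0, dt=2.0 * np.pi / N, dx=2.0 * np.pi / N)
```

Alternating the one-sided differences between the two stages is what recovers second-order accuracy from two first-order sweeps.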

  12. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  13. The Relationship of Explicit-Implicit Evaluative Discrepancy to Exercise Dropout in Middle-Aged Adults.

    PubMed

    Berry, Tanya R; Rodgers, Wendy M; Divine, Alison; Hall, Craig

    2018-06-19

    Discrepancies between automatically activated associations (i.e., implicit evaluations) and explicit evaluations of motives (measured with a questionnaire) could lead to greater information processing to resolve discrepancies or self-regulatory failures that may affect behavior. This research examined the relationship of health and appearance exercise-related explicit-implicit evaluative discrepancies, the interaction between implicit and explicit evaluations, and the combined value of explicit and implicit evaluations (i.e., the summed scores) to dropout from a yearlong exercise program. Participants (N = 253) completed implicit health and appearance measures and explicit health and appearance motives at baseline, prior to starting the exercise program. The sum of implicit and explicit appearance measures was positively related to weeks in the program, and discrepancy between the implicit and explicit health measures was negatively related to length of time in the program. Implicit exercise evaluations and their relationships to oft-cited motives such as appearance and health may inform exercise dropout.

  14. From Sommerfeld and Brillouin forerunners to optical precursors

    NASA Astrophysics Data System (ADS)

    Macke, Bruno; Ségard, Bernard

    2013-04-01

    The Sommerfeld and Brillouin forerunners generated in a single-resonance absorbing medium by an incident step-modulated pulse are theoretically considered in the double limit where the susceptibility of the medium is weak and the resonance is narrow. Combining direct Laplace-Fourier integration and calculations by the saddle-point method, we establish an explicit analytical expression of the transmitted field valid at any time, even when the two forerunners significantly overlap. We examine how their complete overlapping, occurring for shorter propagation distances, leads to the formation of the unique transient currently named resonant precursor or dynamical beat. We obtain an expression of this transient identical to that usually derived within the slowly varying envelope approximation, in spite of the initial discontinuity of the incident field envelope. The dynamical beats and 0π pulses generated by ultrashort incident pulses are also briefly examined.

  15. Aeras: A next generation global atmosphere model

    DOE PAGES

    Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...

    2015-06-01

    Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.

  16. Rule-following as an Anticipatory Act: Interaction in Second Person and an Internal Measurement Model of Dialogue

    NASA Astrophysics Data System (ADS)

    Takahashi, Tatsuji; Gunji, Yukio-Pegio

    2008-10-01

    We pursue anticipation in second person, or normative anticipation. As a first step, we make the three concepts of second person, internal measurement, and asynchroneity clearer by introducing the velocity of logic νl and the velocity of communication νc, in the context of social communication. After proving the anticipatory nature of rule-following, or language use in general, via Kripke's "rule-following paradox," we present a mathematical model expressing the internality essential to second person, taking advantage of equivalences and differences in formal language theory. As a consequence, we show some advantages of negatively considered concepts and arguments by concretizing them into an elementary and explicit formal model. The time development of the model shows a self-organizing property which never results if we adopt a third person stance.

  17. Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Harman, Richard

    2004-01-01

    There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.

  18. Effects of magnetic, radiation and chemical reaction on unsteady heat and mass transfer flow of an oscillating cylinder

    NASA Astrophysics Data System (ADS)

    Ahmed, Rubel; Rana, B. M. Jewel; Ahmmed, S. F.

    2017-06-01

    The effects of magnetic, radiation and chemical reaction parameters on the unsteady heat and mass transfer boundary layer flow past an oscillating cylinder are considered. The dimensionless momentum, energy and concentration equations are solved numerically by an explicit finite difference method implemented in Compaq Visual Fortran 6.6a. The results of this study are discussed for different values of the well-known parameters at different time steps. The effects of these parameters on the velocity, temperature and concentration fields, the skin friction, the Nusselt number, and the streamlines and isotherms have been studied, and the results are presented graphically and quantitatively in tabular form. A stability and convergence analysis of the parameters used in the mathematical model has also been carried out.

  19. Numerical study of supersonic combustion using a finite rate chemistry model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, T.; Tiwari, S. N.; Kumar, A.; Drummond, J. P.

    1986-01-01

    The governing equations of two-dimensional chemically reacting flows are presented together with a global two-step chemistry model for H2-air combustion. The explicit unsplit MacCormack finite difference algorithm is used to advance the discrete system of the governing equations in time until convergence is attained. The source terms in the species equations are evaluated implicitly to alleviate stiffness associated with fast reactions. With implicit source terms, the species equations give rise to a block-diagonal system which can be solved very efficiently on vector-processing computers. A supersonic reacting flow in an inlet-combustor configuration is calculated for the case where H2 is injected into the flow from the side walls and the strut. Results of the calculation are compared against the results obtained by using a complete reaction model.
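Evaluating the source terms implicitly while keeping the transport explicit, as described above, amounts to a point-implicit (linearized backward Euler) update per cell. A generic sketch, with `jac` a user-supplied source Jacobian (the names and the toy decay model are illustrative, not the paper's H2-air chemistry):

```python
import numpy as np

def point_implicit_step(u, dt, source, jac):
    """Point-implicit update for stiff source terms: linearize S about u
    and solve (I - dt*J) du = dt*S(u), with J = dS/du, then u += du.
    In a reacting-flow code this small system is solved cell by cell."""
    A = np.eye(u.size) - dt * jac(u)
    return u + np.linalg.solve(A, dt * source(u))

# Hypothetical stiff single-species decay S(u) = -1000 u with dt = 0.1:
# explicit Euler would amplify the solution by |1 - 100| = 99 per step,
# while the point-implicit step damps it to u / (1 + 100).
u1 = point_implicit_step(
    np.array([1.0]), 0.1,
    source=lambda u: -1000.0 * u,
    jac=lambda u: np.array([[-1000.0]]),
)
```

Because the implicit part only couples the species within one cell, the resulting block-diagonal system vectorizes well, which is the efficiency point the abstract makes.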

  20. Full numerical simulation of coflowing, axisymmetric jet diffusion flames

    NASA Technical Reports Server (NTRS)

    Mahalingam, S.; Cantwell, B. J.; Ferziger, J. H.

    1990-01-01

    The near field of a non-premixed flame in a low speed, coflowing axisymmetric jet is investigated numerically using full simulation. The time-dependent governing equations are solved by a second-order, explicit finite difference scheme and a single-step, finite rate model is used to represent the chemistry. Steady laminar flame results show the correct dependence of flame height on Peclet number and reaction zone thickness on Damkoehler number. Forced simulations reveal a large difference in the instantaneous structure of scalar dissipation fields between nonbuoyant and buoyant cases. In the former, the scalar dissipation marks intense reaction zones, supporting the flamelet concept; however, results suggest that flamelet modeling assumptions need to be reexamined. In the latter, this correspondence breaks down, suggesting that modifications to the flamelet modeling approach are needed in buoyant turbulent diffusion flames.

  1. Continuous Long-Term Modeling of Shallow Groundwater-Surface Water Interaction: Implications for a Wet Prairie Restoration

    NASA Astrophysics Data System (ADS)

    Wijayarathne, D. B.; Gomezdelcampo, E.

    2017-12-01

    The existence of wet prairies is wholly dependent on the groundwater and surface water interaction. Any process that alters this interaction has a significant impact on the eco-hydrology of wet prairies. The Oak Openings Region (OOR) in Northwest Ohio supports globally rare wet prairie habitats and the precious few remaining have been drained by ditches, altering their natural flow and making them an unusually variable and artificial system. The Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model from the US Army Engineer Research and Development Center was used to assess the long-term impacts of land-use change on wet prairie restoration. This study is the first spatially explicit, continuous, long-term modeling approach for understanding the response of the shallow groundwater system of the OOR to human intervention, both positive and negative. The GSSHA model was calibrated using a 2-year weekly time series of water table elevations collected with an array of piezometers in the field. Basic statistical analysis indicates a good fit between observed and simulated water table elevations on a weekly level, though the model was run on an hourly time step and a pixel size of 10 m. Spatially-explicit results show that removal of a local ditch may not drastically change the amount of ponding in the area during spring storms, but large flooding over the entire area would occur if two other ditches are removed. This model is being used by The Nature Conservancy and Toledo Metroparks to develop different scenarios for prairie restoration that minimize its effect on local homeowners.

  2. En Route to Depression: Self-Esteem Discrepancies and Habitual Rumination.

    PubMed

    Phillips, Wendy J; Hine, Donald W

    2016-02-01

    Dual-process models of cognitive vulnerability to depression suggest that some individuals possess discrepant implicit and explicit self-views, such as high explicit and low implicit self-esteem (fragile self-esteem) or low explicit and high implicit self-esteem (damaged self-esteem). This study investigated whether individuals with discrepant self-esteem may employ depressive rumination in an effort to reduce discrepancy-related dissonance, and whether the relationship between self-esteem discrepancy and future depressive symptoms varies as a function of rumination tendencies. Hierarchical regressions examined whether self-esteem discrepancy was associated with rumination in an Australian undergraduate sample at Time 1 (N = 306; M(age) = 29.9), and whether rumination tendencies moderated the relationship between self-esteem discrepancy and depressive symptoms assessed 3 months later (n = 160). Damaged self-esteem was associated with rumination at Time 1. As hypothesized, rumination moderated the relationship between self-esteem discrepancy and depressive symptoms at Time 2, where fragile self-esteem and high rumination tendencies at Time 1 predicted the highest levels of subsequent dysphoria. Results are consistent with dual-process propositions that (a) explicit self-regulation strategies may be triggered when explicit and implicit self-beliefs are incongruent, and (b) rumination may increase the likelihood of depression by expending cognitive resources and/or amplifying negative implicit biases. © 2014 Wiley Periodicals, Inc.

  3. Incorporating parametric uncertainty into population viability analysis models

    USGS Publications Warehouse

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
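The two-loop structure described above, with parametric uncertainty drawn once per replicate and temporal variance drawn at every time step, can be sketched with a toy projection model (all distributions and numbers here are hypothetical placeholders, not the piping plover estimates):

```python
import numpy as np

def pva(n_reps=1000, n_years=50, n0=100.0, quasi_ext=10.0, seed=42):
    """Two-loop population viability simulation: a growth-rate parameter is
    drawn once per replicate (outer loop, parametric uncertainty); yearly
    environmental variation is drawn at each time step (inner loop)."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_reps):
        lam_mean = rng.normal(1.0, 0.05)    # parameter uncertainty, per replicate
        n = n0
        for _ in range(n_years):
            n *= max(rng.normal(lam_mean, 0.1), 0.0)  # environmental draw, per year
        extinct += n < quasi_ext
    return extinct / n_reps

p = pva()  # quasi-extinction probability including parametric uncertainty
```

Dropping the outer-loop draw (fixing `lam_mean = 1.0`) would reproduce the standard practice the authors critique: temporal variance only, and typically a lower estimated extinction risk.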

  4. The Finite-Surface Method for incompressible flow: a step beyond staggered grid

    NASA Astrophysics Data System (ADS)

    Hokpunna, Arpiruk; Misaka, Takashi; Obayashi, Shigeru

    2017-11-01

    We present a newly developed higher-order finite-surface method for the incompressible Navier-Stokes equations (NSE). This method defines the velocities as surface-averaged values on the surfaces of the pressure cells. Consequently, mass conservation on the pressure cells becomes an exact equation. The only things left to approximate are the momentum equation and the pressure at the new time step. Under certain conditions, the exact mass conservation enables an explicit n-th-order accurate NSE solver to be used with a pressure treatment that is two or four orders less accurate, without losing the apparent convergence rate. This feature was not possible with finite volume or finite difference methods. We use Fourier analysis with a model spectrum to determine the condition and find that its range covers standard boundary layer flows. The formal convergence and the performance of the proposed scheme are compared with a sixth-order finite volume method. Finally, the accuracy and performance of the method are evaluated in turbulent channel flows. This work is partially funded by a research collaboration from IFS, Tohoku University, and the ASEAN+3 funding scheme from CMUIC, Chiang Mai University.

  5. An efficient mode-splitting method for a curvilinear nearshore circulation model

    USGS Publications Warehouse

    Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.

    2007-01-01

    A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation in dealing with the mixed derivative and convective terms from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases which represent respectively the motions dominated by the gravity mode and the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in model application to tidal current simulations in San Francisco Bight.

  6. Can stereotype threat affect motor performance in the absence of explicit monitoring processes? Evidence using a strength task.

    PubMed

    Chalabaev, Aïna; Brisswalter, Jeanick; Radel, Rémi; Coombes, Stephen A; Easthope, Christopher; Clément-Guillotin, Corentin

    2013-04-01

    Previous evidence shows that stereotype threat impairs complex motor skills through increased conscious monitoring of task performance. Given that one-step motor skills may not be susceptible to these processes, we examined whether performance on a simple strength task may be reduced under stereotype threat. Forty females and males performed maximum voluntary contractions under stereotypical or nullified-stereotype conditions. Results showed that the velocity of force production within the first milliseconds of the contraction decreased in females when the negative stereotype was induced, whereas maximal force did not change. In males, the stereotype induction only increased maximal force. These findings suggest that stereotype threat may impair motor skills in the absence of explicit monitoring processes, by influencing the planning stage of force production.

  7. Scalable algorithms for 3D extended MHD.

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2007-11-01

    In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU ∼ N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU ∼ N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm., 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys., 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys., 188 (2), 573-592 (2003)].

  8. Explicit asymmetric bounds for robust stability of continuous and discrete-time systems

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang; Antsaklis, Panos J.

    1993-01-01

    The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
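
    As a numerical counterpart to such explicit bounds, one can brute-force the largest perturbation magnitude that preserves stability. A sketch for the continuous-time (Hurwitz) case, with a hypothetical nominal system and perturbation direction chosen purely for illustration:

```python
import numpy as np

# Hypothetical nominal system and uncertainty direction (illustrative only).
A0 = np.array([[-2.0,  1.0],
               [ 0.0, -3.0]])   # stable: eigenvalues -2, -3
E  = np.array([[0.0, 0.0],
               [1.0, 0.0]])     # uncertainty enters one entry

def continuous_stable(A):
    """Hurwitz test: all eigenvalues in the open left half-plane."""
    return np.max(np.linalg.eigvals(A).real) < 0

def max_stable_perturbation(A0, E, p_grid):
    """Largest sampled p such that A0 + q*E is Hurwitz for all sampled
    q <= p — a numerical stand-in for an explicit analytic bound."""
    bound = 0.0
    for p in p_grid:
        if continuous_stable(A0 + p * E):
            bound = p
        else:
            break
    return bound

p_grid = np.linspace(0.0, 20.0, 2001)
print(max_stable_perturbation(A0, E, p_grid))
```

    For this example the characteristic polynomial of A0 + pE is λ² + 5λ + (6 − p), so the true bound is p < 6; the grid search recovers it to grid resolution. The discrete-time version would replace the Hurwitz test with a spectral-radius test ρ(A) < 1.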

  9. Life Expectancy as an Objective Factor of a Subjective Well-Being

    ERIC Educational Resources Information Center

    Papavlassopulos, Nikolas; Keppler, David

    2011-01-01

    The paper has two parts. In the first part we offer a definition of well-being which makes life expectancy an explicit variable. We recognize the importance of happiness as a significant aspect of any definition of well-being, but we side-step the issue of what determines its level or how to measure it, and concentrate instead on the consequences…

  10. Implicit and explicit social mentalizing: dual processes driven by a shared neural network

    PubMed Central

    Van Overwalle, Frank; Vandekerckhove, Marie

    2013-01-01

    Recent social neuroscientific evidence indicates that implicit and explicit inferences on the mind of another person (i.e., intentions, attributions or traits) are subserved by a shared mentalizing network. Under both implicit and explicit instructions, ERP studies reveal that early inferences occur at about the same time, and fMRI studies demonstrate an overlap in core mentalizing areas, including the temporo-parietal junction (TPJ) and the medial prefrontal cortex (mPFC). These results suggest a rapid shared implicit intuition followed by a slower explicit verification process (as revealed by additional brain activation during explicit vs. implicit inferences). These data provide support for a default-adjustment dual-process framework of social mentalizing. PMID:24062663

  11. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
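
    The filter-then-threshold pipeline described above can be sketched generically. Here a moving-average filter and a fixed amplitude threshold stand in for the paper's learned filter and encoding; both choices are placeholders, not the authors' method:

```python
import numpy as np

def robust_extrema(x, width=5, thresh=0.1):
    """Smooth a series, then keep local extrema whose smoothed amplitude
    exceeds a threshold. Filter and threshold are illustrative stand-ins
    for the optimized filter described in the abstract."""
    kernel = np.ones(width) / width
    s = np.convolve(x, kernel, mode="same")   # smoothing filter
    extrema = []
    for i in range(1, len(s) - 1):
        is_max = s[i] > s[i - 1] and s[i] > s[i + 1]
        is_min = s[i] < s[i - 1] and s[i] < s[i + 1]
        if (is_max or is_min) and abs(s[i]) > thresh:
            extrema.append(i)
    return extrema
```

    On two periods of a sine wave this recovers the four true extrema; the paper's contribution is precisely to replace the hand-picked kernel with one optimized (via an eigenvalue problem) so that the surviving extrema are stable under distortions.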

  12. Approaches to rationing antiretroviral treatment: ethical and equity implications.

    PubMed Central

    Bennett, Sara; Chanfreau, Catherine

    2005-01-01

    Despite a growing global commitment to the provision of antiretroviral therapy (ART), its availability is still likely to be less than the need. This imbalance raises ethical dilemmas about who should be granted access to publicly-subsidized ART programmes. This paper reviews the eligibility and targeting criteria used in four case-study countries at different points in the scale-up of ART, with the aim of drawing lessons regarding ethical approaches to rationing. Mexico, Senegal, Thailand and Uganda have each made an explicit policy commitment to provide antiretrovirals to all those in need, but are achieving this goal in steps--beginning with explicit rationing of access to care. Drawing upon the case-studies and experiences elsewhere, categories of explicit rationing criteria have been identified. These include biomedical factors, adherence to treatment, prevention-driven factors, social and economic benefits, financial factors and factors driven by ethical arguments. The initial criteria for determining eligibility are typically clinical criteria and assessment of adherence prospects, followed by a number of other factors. Rationing mechanisms reflect several underlying ethical theories and the ethical underpinnings of explicit rationing criteria should reflect societal values. In order to ensure this alignment, widespread consultation with a variety of stakeholders, and not only policy-makers or physicians, is critical. Without such explicit debate, more rationing will occur implicitly and this may be more inequitable. The effects of rationing mechanisms upon equity are critically dependent upon the implementation processes. As antiretroviral programmes are implemented it is crucial to monitor who gains access to these programmes. PMID:16175829

  13. Computer model of two-dimensional solute transport and dispersion in ground water

    USGS Publications Warehouse

    Konikow, Leonard F.; Bredehoeft, J.D.

    1978-01-01

    This report presents a model that simulates solute transport in flowing ground water. The model is both general and flexible in that it can be applied to a wide range of problem types. It is applicable to one- or two-dimensional problems involving steady-state or transient flow. The model computes changes in concentration over time caused by the processes of convective transport, hydrodynamic dispersion, and mixing (or dilution) from fluid sources. The model assumes that the solute is non-reactive and that gradients of fluid density, viscosity, and temperature do not affect the velocity distribution. However, the aquifer may be heterogeneous and (or) anisotropic. The model couples the ground-water flow equation with the solute-transport equation. The digital computer program uses an alternating-direction implicit procedure to solve a finite-difference approximation to the ground-water flow equation, and it uses the method of characteristics to solve the solute-transport equation. The latter uses a particle-tracking procedure to represent convective transport and a two-step explicit procedure to solve a finite-difference equation that describes the effects of hydrodynamic dispersion, fluid sources and sinks, and divergence of velocity. This explicit procedure has several stability criteria, but the consequent time-step limitations are automatically determined by the program. The report includes a listing of the computer program, which is written in FORTRAN IV and contains about 2,000 lines. The model is based on a rectangular, block-centered, finite-difference grid. It allows the specification of any number of injection or withdrawal wells and of spatially varying diffuse recharge or discharge, saturated thickness, transmissivity, boundary conditions, and initial heads and concentrations.
The program also permits the designation of up to five nodes as observation points, for which a summary table of head and concentration versus time is printed at the end of the calculations. The data input formats for the model require three data cards and from seven to nine data sets to describe the aquifer properties, boundaries, and stresses. The accuracy of the model was evaluated for two idealized problems for which analytical solutions could be obtained. In the case of one-dimensional flow the agreement was nearly exact, but in the case of plane radial flow a small amount of numerical dispersion occurred. An analysis of several test problems indicates that the error in the mass balance will be generally less than 10 percent. The test problems demonstrated that the accuracy and precision of the numerical solution is sensitive to the initial number of particles placed in each cell and to the size of the time increment, as determined by the stability criteria. Mass balance errors are commonly the greatest during the first several time increments, but tend to decrease and stabilize with time.
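
    The kind of automatic time-step limitation described above amounts to taking the minimum over the individual explicit-stability criteria. The Courant and dispersion formulas below are the standard textbook ones, not necessarily the exact expressions coded in the USGS program:

```python
# Sketch: the largest explicit time step allowed by two common criteria,
# an advective Courant condition and the explicit-dispersion condition.
# Coefficients are the generic ones, not the model's exact implementation.

def stable_time_step(vx, vy, D, dx, dy, courant=1.0):
    """Return the largest dt satisfying both the advective Courant
    criterion and the explicit-dispersion stability criterion."""
    dt_adv = min(courant * dx / abs(vx) if vx else float("inf"),
                 courant * dy / abs(vy) if vy else float("inf"))
    dt_disp = 0.5 / (D / dx**2 + D / dy**2) if D else float("inf")
    return min(dt_adv, dt_disp)

# Advection-dominated example: the Courant limit (10 time units) binds.
print(stable_time_step(vx=1.0, vy=0.5, D=0.01, dx=10.0, dy=10.0))
```

    The program then simply subdivides each flow time step into as many transport increments of this size as needed, which is why mass-balance behavior depends on the stability-limited increment as noted above.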

  14. Learning to predict chemical reactions.

    PubMed

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal ( http://cdb.ics.uci.edu) under the Toolkits section.

  15. Segregation of Brain Structural Networks Supports Spatio-Temporal Predictive Processing.

    PubMed

    Ciullo, Valentina; Vecchio, Daniela; Gili, Tommaso; Spalletta, Gianfranco; Piras, Federica

    2018-01-01

    The ability to generate probabilistic expectancies regarding when and where sensory stimuli will occur is critical to derive timely and accurate inferences about updating contexts. However, the existence of specialized neural networks for inferring predictive relationships between events is still debated. Using graph theoretical analysis applied to structural connectivity data, we tested the extent of brain connectivity properties associated with spatio-temporal predictive performance across 29 healthy subjects. Participants detected visual targets appearing at one out of three locations after one out of three intervals; expectations about stimulus location (spatial condition) or onset (temporal condition) were induced by valid or invalid symbolic cues. Connectivity matrices and centrality/segregation measures, expressing the relative importance of, and the local interactions among, specific cerebral areas with respect to the behavior under investigation, were calculated from whole-brain tractography and cortico-subcortical parcellation. Results: Response preparedness to cued stimuli relied on different structural connectivity networks for the temporal and spatial domains. Significant covariance was observed between centrality measures of regions within a subcortical-fronto-parietal-occipital network (comprising the left putamen, the right caudate nucleus, the left frontal operculum, the right inferior parietal cortex, the right paracentral lobule, and the right superior occipital cortex) and the ability to respond after a short cue-target delay, suggesting that the local connectedness of such nodes plays a central role when the source of temporal expectation is explicit. When the potential for functional segregation was tested, we found highly clustered structural connectivity across the right superior, the left middle inferior frontal gyrus, and the left caudate nucleus as related to explicit temporal orienting. 
    Conversely, when the interaction between explicit and implicit temporal orienting processes was considered at the long interval, we found that explicit processes were related to centrality measures of the bilateral inferior parietal lobule. Degree centrality of the same region in the left hemisphere covaried with behavioral measures indexing the process of attentional re-orienting. These results represent a crucial step beyond the ordinary predictive-processing description, as we identified the patterns of connectivity characterizing the brain organization associated with the ability to generate and update temporal expectancies in case of contextual violations.

  16. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields, we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.
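
    The trade-off at issue here — capping the Alfvén speed loosens the CFL time-step limit — can be sketched as follows. The soft-cap formula and the numbers are illustrative assumptions, not the limiter used in any particular helioseismology code:

```python
import numpy as np

def alfven_speed(B, rho, mu0=4e-7 * np.pi):
    """Alfvén speed v_A = B / sqrt(mu0 * rho), SI units."""
    return B / np.sqrt(mu0 * rho)

def capped_speed(v_a, c_max):
    """Soft cap (illustrative form): v -> v / sqrt(1 + (v/c_max)^2),
    which leaves small v untouched and asymptotes to c_max."""
    return v_a / np.sqrt(1.0 + (v_a / c_max) ** 2)

def cfl_dt(speed, dx, cfl=0.5):
    """Explicit CFL-limited time step for a wave speed on grid spacing dx."""
    return cfl * dx / speed

va = alfven_speed(B=0.1, rho=1.0e-12)    # low-density, corona-like values
print(cfl_dt(va, dx=1.0e5), cfl_dt(capped_speed(va, 1.0e6), dx=1.0e5))
```

    The capped speed permits a much larger explicit time step, which is exactly why such limiters are attractive for long simulations — and why, as the study above shows, the cap must still sit well above the physically relevant phase speeds to avoid corrupting travel times.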

  17. A finite elements method to solve the Bloch-Torrey equation applied to diffusion magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis

    2014-04-01

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
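
    The adaptive control underlying such time steppers can be sketched with a simpler embedded pair. The following Euler/Heun controller is not the Runge-Kutta-Chebyshev method itself, and its safety-factor constants are illustrative; it only shows how the step size is chosen from a local error estimate:

```python
import numpy as np

def adaptive_heun(f, t, y, dt, tol=1e-6):
    """One accepted step of Heun's method with step-size control.
    Returns (new_t, new_y, suggested_next_dt)."""
    while True:
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_euler = y + dt * k1                  # first-order solution
        y_heun = y + 0.5 * dt * (k1 + k2)      # second-order solution
        err = np.max(np.abs(y_heun - y_euler)) # local error estimate
        if err <= tol:
            # accept; standard safety-factor step-size suggestion
            dt_next = 0.9 * dt * np.sqrt(tol / max(err, 1e-30))
            return t + dt, y_heun, min(dt_next, 10 * dt)
        dt *= 0.5                              # reject and retry smaller
```

    A stabilized scheme like RKC follows the same accept/reject logic but uses a many-stage Chebyshev-polynomial update so that the stability interval grows with the stage count, which is what makes it effective for the diffusion-dominated Bloch-Torrey problem above.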

  18. A microgenetic study of learning about the molecular theory of matter and chemical reactions

    NASA Astrophysics Data System (ADS)

    Chinn, Clark Allen

    This paper reports the results of an experimental microgenetic study of children learning complex knowledge from text and experiments. The study had two goals. The first was to investigate fine-grained, moment-to-moment changes in knowledge as middle-school students learned about molecules and chemical reactions over thirteen sessions. The second was to investigate the effects of two instructional treatments, one using implicit textbook explanations and one using explicit explanations developed according to a theory of how scientific knowledge is structured. In the study, 61 sixth- and seventh-graders worked one on one with undergraduate instructors in eleven sessions of about 50 to 80 minutes. The instructors guided the students in conducting experiments and thinking out loud about texts. Topics studied included molecules, states of matter, chemical reactions, and heat transfer. A dense array of questions provided a detailed picture of children's moment-to-moment and day-to-day changes in knowledge. Three results chapters address students' preinstructional knowledge, the effects of the experimental treatment at posttest, and five detailed case studies of students' step-by-step knowledge change over eleven sessions. The chapter on preinstructional knowledge discussed three aspects of global knowledge change: conceptual change, coherence, and entrenchment. Notably, this chapter provides systematic evidence that children's knowledge was fragmented and that consistency with general unifying principles did not guarantee a highly coherent body of knowledge. The experimental manipulation revealed a strong advantage for explicit explanations over implicit textbook explanations. Multiple explicit explanations (e.g., highly explicit explanations of three or four chemical reactions) appeared to be necessary for students to master key concepts. 
Microgenetic analyses of five cases addressed eight empirical issues that should be addressed by any theory of knowledge acquisition: (a) the nature of the overall knowledge change, (b) the progression of intermediate states during knowledge change, (c) initiators of knowledge change, (d) interactions of prior background knowledge and prior domain knowledge during knowledge changes, (e) the fate of old and new knowledge, (f) the relationship between belief and knowledge, (g) changes in meta-awareness, and (h) factors that influenced the course of knowledge change.

  19. A three-dimensional method-of-characteristics solute-transport model (MOC3D)

    USGS Publications Warehouse

    Konikow, Leonard F.; Goode, D.J.; Hornberger, G.Z.

    1996-01-01

    This report presents a model, MOC3D, that simulates three-dimensional solute transport in flowing ground water. The model computes changes in concentration of a single dissolved chemical constituent over time that are caused by advective transport, hydrodynamic dispersion (including both mechanical dispersion and diffusion), mixing (or dilution) from fluid sources, and mathematically simple chemical reactions (including linear sorption, which is represented by a retardation factor, and decay). The transport model is integrated with MODFLOW, a three-dimensional ground-water flow model that uses implicit finite-difference methods to solve the transient flow equation. MOC3D uses the method of characteristics to solve the transport equation on the basis of the hydraulic gradients computed with MODFLOW for a given time step. This implementation of the method of characteristics uses particle tracking to represent advective transport and explicit finite-difference methods to calculate the effects of other processes. However, the explicit procedure has several stability criteria that may limit the size of time increments for solving the transport equation; these are automatically determined by the program. For improved efficiency, the user can apply MOC3D to a subgrid of the primary MODFLOW grid that is used to solve the flow equation. However, the transport subgrid must have uniform grid spacing along rows and columns. The report includes a description of the theoretical basis of the model, a detailed description of input requirements and output options, and the results of model testing and evaluation. The model was evaluated for several problems for which exact analytical solutions are available and by benchmarking against other numerical codes for selected complex problems for which no exact solutions are available. 
These test results indicate that the model is very accurate for a wide range of conditions and yields minimal numerical dispersion for advection-dominated problems. Mass-balance errors are generally less than 10 percent, and tend to decrease and stabilize with time.

  20. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
    We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
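
    The SBP property invoked in such energy estimates can be checked directly for the classical second-order operator. The construction below is the standard textbook one (not the sixth-order operators of the paper), shown as a sketch:

```python
import numpy as np

# Standard second-order SBP first-derivative operator D = H^{-1} Q.
# The SBP identity Q + Q^T = B = diag(-1, 0, ..., 0, 1) mimics
# integration by parts and is what enables discrete energy estimates.

def sbp_second_order(n, dx):
    H = np.eye(n) * dx                 # diagonal norm (quadrature) matrix
    H[0, 0] = H[-1, -1] = dx / 2
    D = np.zeros((n, n))
    D[0, 0], D[0, 1] = -1 / dx, 1 / dx           # one-sided at boundaries
    D[-1, -2], D[-1, -1] = -1 / dx, 1 / dx
    for i in range(1, n - 1):                    # central in the interior
        D[i, i - 1], D[i, i + 1] = -1 / (2 * dx), 1 / (2 * dx)
    return H, D

H, D = sbp_second_order(11, 0.1)
Q = H @ D
B = np.zeros((11, 11)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))  # True: the SBP identity holds exactly
```

    Because Q + Qᵀ reduces to boundary terms only, any growth in the discrete energy can come only from the boundaries, where the penalty terms mentioned above are designed to make it non-positive.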

  1. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  2. Are Financial Incentives for Lifestyle Behavior Change Informed or Inspired by Behavioral Economics? A Mapping Review.

    PubMed

    McGill, Bronwyn; O'Hara, Blythe J; Bauman, Adrian; Grunseit, Anne C; Phongsavan, Philayrath

    2018-01-01

    To identify the behavioral economics (BE) conceptual underpinnings of lifestyle financial incentive (FI) interventions, a mapping review of peer-reviewed literature was conducted by searching electronic databases. Inclusion criteria were real-world FI interventions explicitly mentioning BE, targeting individuals or populations, with lifestyle-related behavioral outcomes. Exclusion criteria were hypothetical studies, a health professional focus, and clinically oriented interventions. Study characteristics were tabulated according to purpose, categorization of BE concepts and FI types, design, outcome measures, study quality, and findings. Data synthesis and analysis: financial incentives were categorized according to type and payment structure. Behavioral economics concepts explicitly used in the intervention design were grouped based on common patterns of thinking. The interplay between FI types, BE concepts, and outcomes was assessed. Seventeen studies were identified from 1452 unique records. Analysis showed that 76.5% (n = 13) of studies explicitly incorporated BE concepts. Six studies provided clear theoretical justification for the inclusion of BE. No pattern in the type of FI and BE concepts used was apparent. Not all FI interventions claiming BE inclusion actually incorporated it. For interventions that explicitly included BE, the degree to which this was portrayed and woven into the design varied. This review identified BE concepts common to FI interventions, a first step in providing emergent and pragmatic information to public health and health promotion program planners.

  3. Modeling SOA formation from the oxidation of intermediate volatility n-alkanes

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.

    2012-08-01

    The chemical mechanism leading to SOA formation and ageing is expected to be a multigenerational process, i.e. a successive formation of organic compounds with higher oxidation degree and lower vapor pressure. This process is investigated here with the explicit oxidation model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere). Gas phase oxidation schemes are generated for the C8-C24 series of n-alkanes. Simulations are conducted to explore the time evolution of organic compounds and the behavior of secondary organic aerosol (SOA) formation for various preexisting organic aerosol concentrations (COA). As expected, simulation results show that (i) SOA yield increases with the carbon chain length of the parent hydrocarbon, (ii) SOA yield decreases with decreasing COA, (iii) SOA production rates increase with increasing COA and (iv) the number of oxidation steps (i.e. generations) needed to describe SOA formation and evolution grows when COA decreases. The simulated oxidative trajectories are examined in a two-dimensional space defined by the mean carbon oxidation state and the volatility. Most SOA contributors are not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA), suggesting that OOA may underestimate SOA. Results show that the model is unable to produce highly oxygenated aerosols (OOA) with large yields. The limitations of the model are discussed.

  4. Modeling SOA formation from the oxidation of intermediate volatility n-alkanes

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.

    2012-06-01

    The chemical mechanism leading to SOA formation and ageing is expected to be a multigenerational process, i.e. a successive formation of organic compounds with higher oxidation degree and lower vapor pressure. This process is investigated here with the explicit oxidation model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere). Gas phase oxidation schemes are generated for the C8-C24 series of n-alkanes. Simulations are conducted to explore the time evolution of organic compounds and the behavior of secondary organic aerosol (SOA) formation for various preexisting organic aerosol concentrations (COA). As expected, simulation results show that (i) SOA yield increases with the carbon chain length of the parent hydrocarbon, (ii) SOA yield decreases with decreasing COA, (iii) SOA production rates increase with increasing COA and (iv) the number of oxidation steps (i.e. generations) needed to describe SOA formation and evolution grows as COA decreases. The simulated oxidative trajectories are examined in a two-dimensional space defined by the mean carbon oxidation state and the volatility. Most SOA contributors are not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but are reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA), suggesting that OOA may underestimate SOA. Results show that the model is unable to produce highly oxygenated aerosols (OOA) with large yields. The limitations of the model are discussed.

  5. A Study of the Response of Deep Tropical Clouds to Mesoscale Processes. Part 2; Sensitivities to Microphysics, Radiation, and Surface Fluxes

    NASA Technical Reports Server (NTRS)

    Johnson, Daniel; Tao, Wei-Kuo; Simpson, Joanne

    2004-01-01

    The Goddard Cumulus Ensemble (GCE) model is used to examine the sensitivity of multi-day simulations of deep tropical convection over the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE) to surface fluxes, explicit radiation, and ice microphysical processes. The simulations incorporate large-scale advective temperature and moisture forcing, as well as large-scale momentum, updated every time step on a periodic lateral boundary grid. This study shows that when surface fluxes are eliminated, the mean atmosphere is much cooler and drier, convection and CAPE are much weaker, precipitation is less, and cloud coverage in stratiform regions is much greater. Surface fluxes using the TOGA COARE flux algorithm are weaker than with the aerodynamic formulation, but closer to the observed fluxes. In addition, trends similar to those noted above for the case without surface fluxes are produced for the TOGA flux case, albeit to a much lesser extent. The elimination of explicit shortwave and longwave radiation is found to have only minimal effects on the mean thermodynamics, convection, and precipitation. However, explicit radiation does have a significant impact on cloud temperatures and structure above 200 mb and on the overall mean vertical circulation. The removal of ice processes produces major changes in the structure of the cloud. Much of the liquid water is transported aloft and into anvils above the melting layer (600 mb), leaving narrow but intense bands of rainfall in convective regions. The elimination of melting processes leads to greater hydrometeor mass below the melting layer and produces a much warmer and moister boundary layer, leading to a greater mean CAPE. Finally, the elimination of the graupel species has only a small impact on the mean total precipitation, thermodynamics, and dynamics of the simulation, but does produce much greater snow mass just above the melting layer. Some of these results differ from previous CRM studies of tropical systems, which is likely due to differences in the type of simulated system, total time integration, and model setup.

  6. Increased Specificity of Wechsler Adult Intelligence Scale-Third Edition Matrix Reasoning Test Instructions and Time Limits

    ERIC Educational Resources Information Center

    Callens, Andy M.; Atchison, Timothy B.; Engler, Rachel R.

    2009-01-01

    Instructions for the Matrix Reasoning Test (MRT) of the Wechsler Adult Intelligence Scale-Third Edition were modified by explicitly stating that the subtest was untimed or that a per-item time limit would be imposed. The MRT was administered within one of four conditions: with (a) standard administration instructions, (b) explicit instructions…

  7. An overview on STEP-NC compliant controller development

    NASA Astrophysics Data System (ADS)

    Othman, M. A.; Minhat, M.; Jamaludin, Z.

    2017-10-01

    The capabilities of conventional Computer Numerical Control (CNC) machine tools, as the final stage of the manufacturing chain, to fabricate high-quality parts promptly, economically and precisely are undeniable. To date, most CNCs follow the programming standard ISO 6983, also called G & M code. However, in a fluctuating shop floor environment, the flexibility and interoperability of current CNC systems, and their ability to react dynamically and adaptively, are believed to be still limited. This outdated programming language describes tool motion only block by block and does not explicitly relate machining steps to one another. To address this limitation, a new standard known as STEP-NC was developed in the late 1990s and formalized as ISO 14649. It adds intelligence to the CNC in terms of interoperability, flexibility, adaptability and openness. This paper presents an overview of the research work that has been done in developing STEP-NC compliant controllers and of the capabilities of STEP-NC to meet modern manufacturing demands. The review finds that most existing STEP-NC controller prototypes are based on type 1 and type 2 implementation levels; there is still a lack of effort to develop type 3 and type 4 STEP-NC compliant controllers.

  8. Steps to a HealthierUS Cooperative Agreement Program: foundational elements for program evaluation planning, implementation, and use of findings.

    PubMed

    MacDonald, Goldie; Garcia, Danyael; Zaza, Stephanie; Schooley, Michael; Compton, Don; Bryant, Terry; Bagnol, Lulu; Edgerly, Cathy; Haverkate, Rick

    2006-01-01

    The Steps to a HealthierUS Cooperative Agreement Program (Steps Program) enables funded communities to implement chronic disease prevention and health promotion efforts to reduce the burden of diabetes, obesity, asthma, and related risk factors. At both the national and community levels, investment in surveillance and program evaluation is substantial. Public health practitioners engaged in program evaluation planning often identify desired outcomes, related indicators, and data collection methods but may pay only limited attention to an overarching vision for program evaluation among participating sites. We developed a set of foundational elements to provide a vision of program evaluation that informs the technical decisions made throughout the evaluation process. Given the diversity of activities across the Steps Program and the need for coordination between national- and community-level evaluation efforts, our recommendations to guide program evaluation practice are explicit yet leave room for site-specific context and needs. Staff across the Steps Program must consider these foundational elements to prepare a formal plan for program evaluation. Attention to each element moves the Steps Program closer to well-designed and complementary plans for program evaluation at the national, state, and community levels.

  9. Low-storage implicit/explicit Runge-Kutta schemes for the simulation of stiff high-dimensional ODE systems

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas

    2015-04-01

    Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
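    The splitting idea can be seen in its simplest form with first-order IMEX Euler, applied below to a scalar model problem. This is only a sketch of the implicit/explicit splitting, not the paper's low-storage RK schemes; the decay rate and forcing are illustrative:

    ```python
    # First-order IMEX Euler: the stiff linear term lam*u is advanced
    # implicitly, the nonstiff forcing explicitly. Solving the implicit
    # stage for a linear stiff term needs only a division:
    #   u_new = (u + dt * f(t)) / (1 - dt * lam)
    import math

    def imex_euler(u0, lam, forcing, dt, nsteps):
        u, t = u0, 0.0
        for _ in range(nsteps):
            u = (u + dt * forcing(t)) / (1.0 - dt * lam)
            t += dt
        return u

    lam = -1000.0              # stiff decay rate (illustrative)
    forcing = math.sin         # nonstiff term, treated explicitly
    u = imex_euler(1.0, lam, forcing, dt=0.01, nsteps=1000)
    print(u)   # remains bounded even though dt*|lam| = 10
    ```

    A fully explicit Euler step at this dt would have amplification factor 1 + dt*lam = -9 and blow up; the implicit treatment of the stiff part removes that restriction while the nonstiff part stays cheap.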

  10. Understanding how biodiversity unfolds through time under neutral theory.

    PubMed

    Missa, Olivier; Dytham, Calvin; Morlon, Hélène

    2016-04-05

    Theoretical predictions for biodiversity patterns are typically derived under the assumption that ecological systems have reached a dynamic equilibrium. Yet, there is increasing evidence that various aspects of ecological systems, including (but not limited to) species richness, are not at equilibrium. Here, we use simulations to analyse how biodiversity patterns unfold through time. In particular, we focus on the relative time required for various biodiversity patterns (macroecological or phylogenetic) to reach equilibrium. We simulate spatially explicit metacommunities according to the Neutral Theory of Biodiversity (NTB) under three modes of speciation, which differ in how evenly a parent species is split between its two daughter species. We find that species richness stabilizes first, followed by species area relationships (SAR) and finally species abundance distributions (SAD). The difference in timing of equilibrium between these different macroecological patterns is the largest when the split of individuals between sibling species at speciation is the most uneven. Phylogenetic patterns of biodiversity take even longer to stabilize (tens to hundreds of times longer than species richness) so that equilibrium predictions from neutral theory for these patterns are unlikely to be relevant. Our results suggest that it may be unwise to assume that biodiversity patterns are at equilibrium and provide a first step in studying how these patterns unfold through time. © 2016 The Author(s).
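    The zero-sum neutral dynamics underlying such simulations can be sketched in a few lines. The non-spatial toy model below (point-mutation speciation, no explicit space, illustrative parameters) is far simpler than the spatially explicit metacommunities simulated in the paper, but shows the same ingredient: richness emerging from drift plus speciation over many replacement events.

    ```python
    # Minimal Hubbell-style zero-sum neutral community: each event kills one
    # random individual and replaces it either with the offspring of another
    # individual or, with probability nu, with a brand-new species
    # (point-mutation speciation). Parameters are illustrative.
    import random

    def neutral_drift(J=200, nu=0.05, events=20000, seed=1):
        rng = random.Random(seed)
        community = [0] * J            # start from a single species
        next_species = 1
        for _ in range(events):
            i = rng.randrange(J)       # individual that dies
            if rng.random() < nu:      # speciation event
                community[i] = next_species
                next_species += 1
            else:                      # replaced by another's offspring
                community[i] = community[rng.randrange(J)]
        return len(set(community))     # species richness

    print(neutral_drift())
    ```

    Tracking the returned richness as a function of the number of events, rather than only its final value, is exactly the kind of "pattern unfolding through time" analysis the paper performs for richness, SARs and SADs.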

  11. Understanding how biodiversity unfolds through time under neutral theory

    PubMed Central

    2016-01-01

    Theoretical predictions for biodiversity patterns are typically derived under the assumption that ecological systems have reached a dynamic equilibrium. Yet, there is increasing evidence that various aspects of ecological systems, including (but not limited to) species richness, are not at equilibrium. Here, we use simulations to analyse how biodiversity patterns unfold through time. In particular, we focus on the relative time required for various biodiversity patterns (macroecological or phylogenetic) to reach equilibrium. We simulate spatially explicit metacommunities according to the Neutral Theory of Biodiversity (NTB) under three modes of speciation, which differ in how evenly a parent species is split between its two daughter species. We find that species richness stabilizes first, followed by species area relationships (SAR) and finally species abundance distributions (SAD). The difference in timing of equilibrium between these different macroecological patterns is the largest when the split of individuals between sibling species at speciation is the most uneven. Phylogenetic patterns of biodiversity take even longer to stabilize (tens to hundreds of times longer than species richness) so that equilibrium predictions from neutral theory for these patterns are unlikely to be relevant. Our results suggest that it may be unwise to assume that biodiversity patterns are at equilibrium and provide a first step in studying how these patterns unfold through time. PMID:26977066

  12. On Feeling Torn About One’s Sexuality

    PubMed Central

    Windsor-Shellard, Ben

    2014-01-01

    Three studies offer novel evidence addressing the consequences of explicit–implicit sexual orientation (SO) ambivalence. In Study 1, self-identified straight females completed explicit and implicit measures of SO. The results revealed that participants with greater SO ambivalence took longer responding to explicit questions about their sexual preferences, an effect moderated by the direction of ambivalence. Study 2 replicated this effect using a different paradigm. Study 3 included self-identified straight and gay female and male participants; participants completed explicit and implicit measures of SO, plus measures of self-esteem and affect regarding their SO. Among straight participants, the response time results replicated the findings of Studies 1 and 2. Among gay participants, trends suggested that SO ambivalence influenced time spent deliberating on explicit questions relevant to sexuality, but in a different way. Furthermore, the amount and direction of SO ambivalence was related to self-esteem. PMID:24972940

  13. Corrigenda of 'explicit wave-averaged primitive equations using a generalized Lagrangian Mean'

    NASA Astrophysics Data System (ADS)

    Ardhuin, F.; Rascle, N.; Belibassakis, K. A.

    2017-05-01

    Ardhuin et al. (2008) gave a second-order approximation in the wave slope of the exact Generalized Lagrangian Mean (GLM) equations derived by Andrews and McIntyre (1978), and also performed a coordinate transformation, going from GLM to a 'GLMz' set of equations. That latter step removed the wandering of the GLM mean sea level away from the Eulerian-mean sea level, making the GLMz flow non-divergent. The description of that step contained some inaccurate statements about the coordinate transformation, while the rest of the paper contained an error in the surface dynamic boundary condition for viscous stresses. I am thankful to Mathias Delpey and Hidenori Aiki for pointing out these errors, which are corrected below.

  14. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give explicit symplectic algorithms based on the generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has already been applied to the non-relativistic dynamics of charged particles. However, the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
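    Why symplecticity matters for long-term tracking can be shown with the simplest explicit symplectic map. The sketch below uses symplectic Euler on a harmonic oscillator, not the paper's generating-function construction for relativistic dynamics; it only illustrates the bounded-energy behavior that motivates such schemes.

    ```python
    # Symplectic Euler for H = (p^2 + q^2) / 2: the momentum kick uses the
    # old position, and the position drift uses the *updated* momentum.
    # This ordering is what makes the one-step map symplectic, and it keeps
    # the energy error bounded over arbitrarily long integrations (ordinary
    # explicit Euler would drift secularly).

    def symplectic_euler(q, p, dt, nsteps):
        for _ in range(nsteps):
            p -= dt * q        # kick:  p <- p - dt * dH/dq
            q += dt * p        # drift: q <- q + dt * p (updated p)
        return q, p

    q, p = 1.0, 0.0
    q, p = symplectic_euler(q, p, dt=0.01, nsteps=100000)
    energy = 0.5 * (p * p + q * q)
    print(energy)   # stays close to the initial value 0.5
    ```

    After 10^5 steps (about 160 oscillation periods) the energy is still within O(dt) of its initial value, which is the property that makes explicit symplectic integrators attractive for secular runaway-electron simulations.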

  15. Probabilistic Plan Management

    DTIC Science & Technology

    2009-11-17

    set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this

  16. FY06 NRL DoD High Performance Computing Modernization Program Annual Reports

    DTIC Science & Technology

    2007-10-31

    our simulations yield important new information on the amount and form of the energy that is released by these explosive events. These results...coupled with the ideal-gas equation of state and a one-step Arrhenius kinetics of energy release. The equations are solved using the explicit...practical applications, including hydrogen safety and pulse-detonation engines (PDE). For example, the results summarizing the effect of obstacle

  17. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
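    The step of passing Weibull strength statistics up one scale level can be sketched with the classical weakest-link argument (this is a generic illustration with made-up parameters, not the authors' automaton model): a coarse element made of n fine subvolumes fails at its weakest subvolume, so Weibull(m, s0) strengths upscale to Weibull(m, s0 * n**(-1/m)).

    ```python
    # Weakest-link upscaling of Weibull-distributed strength. The shape
    # parameter m, scale s0 and subvolume count n are illustrative.
    import math
    import random

    def fine_strengths(m, s0, size, rng):
        # inverse-CDF sampling of Weibull(shape m, scale s0)
        return [s0 * (-math.log(1.0 - rng.random())) ** (1.0 / m)
                for _ in range(size)]

    def coarse_strengths(m, s0, n, size, rng):
        # each coarse sample is the minimum over n fine subvolumes
        return [min(fine_strengths(m, s0, n, rng)) for _ in range(size)]

    rng = random.Random(0)
    m, s0, n = 5.0, 1.0, 8
    coarse = coarse_strengths(m, s0, n, 5000, rng)
    empirical_mean = sum(coarse) / len(coarse)
    # theory: min of n iid Weibull(m, s0) is Weibull(m, s0 * n**(-1/m)),
    # whose mean is scale * Gamma(1 + 1/m)
    theoretical_mean = s0 * n ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)
    print(empirical_mean, theoretical_mean)
    ```

    In the paper's scheme the fine-scale Weibull parameters come from explicit simulations of the smallest pores rather than from an assumed distribution, but the hand-off of (modulus, strength, Weibull parameters) between levels follows the same logic.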

  18. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

    The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and are easily parallelised for large-scale problems on distributed architectures. Preserving seven digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
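    The factorization idea can be illustrated with the classical (undamped, first-order) Chebyshev stability polynomial; FRKC's complex-stepsize sequences generalize this, so the sketch below is only the real-root textbook case, not the FRKC2 scheme itself. The stability polynomial R(z) = T_s(1 + z/s^2) factors into an ordered product of forward Euler amplification factors whose stepsizes are set by the roots of R, and |R(z)| <= 1 along the negative real axis out to z = -2 s^2, an O(s^2) extension over a single Euler step.

    ```python
    # Factorized Chebyshev stability polynomial: realize R(z) = T_s(1 + z/s^2)
    # as a product of forward Euler factors (1 - z/z_k), where z_k are the
    # roots of R. Here the roots are real; FRKC uses complex stepsizes.
    import cmath
    import math

    s = 5
    # roots of T_s(x) are x_k = cos((2k-1) pi / (2s)); map back via
    # x = 1 + z/s^2  =>  z_k = s^2 (x_k - 1), all negative real
    roots = [s * s * (math.cos((2 * k - 1) * math.pi / (2 * s)) - 1.0)
             for k in range(1, s + 1)]

    def R_factored(z):
        """Ordered product of forward Euler amplification factors."""
        out = 1.0
        for zk in roots:
            out *= (1.0 - z / zk)
        return out

    def T(deg, x):
        """Chebyshev polynomial of the first kind (cos/cosh form)."""
        if abs(x) <= 1.0:
            return math.cos(deg * math.acos(x))
        return cmath.cosh(deg * cmath.acosh(complex(x))).real

    z = -30.0
    print(R_factored(z), T(s, 1.0 + z / (s * s)))   # the two agree
    ```

    Both expressions are degree-s polynomials with the same roots and value 1 at z = 0, so they coincide identically; applying the Euler factors in a stability-preserving order is the practical content of the "factorized" formulation.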

  19. Effect of wall-mediated hydrodynamic fluctuations on the kinetics of a Brownian nanoparticle

    NASA Astrophysics Data System (ADS)

    Yu, Hsiu-Yu; Eckmann, David M.; Ayyaswamy, Portonovo S.; Radhakrishnan, Ravi

    2016-12-01

    The reactive flux formalism (Chandler 1978 J. Chem. Phys. 68, 2959-2970. (doi:10.1063/1.436049)) and the subsequent development of methods such as transition path sampling have laid the foundation for explicitly quantifying the rate process in terms of microscopic simulations. However, explicit methods to account for how the hydrodynamic correlations impact the transient reaction rate are missing in the colloidal literature. We show that the composite generalized Langevin equation (Yu et al. 2015 Phys. Rev. E 91, 052303. (doi:10.1103/PhysRevE.91.052303)) makes a significant step towards solving the coupled processes of molecular reactions and hydrodynamic relaxation by examining how the wall-mediated hydrodynamic memory impacts the two-stage temporal relaxation of the reaction rate for a nanoparticle transition between two bound states in the bulk, near-wall and lubrication regimes.

  20. sl(1|2) Super-Toda Fields

    NASA Astrophysics Data System (ADS)

    Yang, Zhan-Ying; Xue, Pan-Pan; Zhao, Liu; Shi, Kang-Jie

    2008-11-01

    An explicit exact solution of the supersymmetric Toda fields associated with the Lie superalgebra sl(2|1) is constructed. The approach used is a super extension of the Leznov-Saveliev algebraic analysis, which is based on a pair of chiral and antichiral Drinfeld-Sokolov systems. Though this approach is well understood for Toda field theories associated with ordinary Lie algebras, its super analogue was previously successful only in the super Liouville case with the underlying Lie superalgebra osp(1|2). The problem lies in that a key step in the construction makes use of the tensor product decomposition of the highest weight representations of the underlying Lie superalgebra, which was not clarified until recently. The construction made in this paper therefore presents the first explicit example of Leznov-Saveliev analysis for super Toda systems associated with underlying Lie superalgebras of rank higher than 1.

  1. The Presence of Turbulent and Ordered Local Structure within the ICME Shock-sheath and Its Contribution to Forbush Decrease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaikh, Zubair; Bhaskar, Ankush; Raghav, Anil, E-mail: raghavanil1984@gmail.com

    Transient interplanetary disturbances evoke a short-time cosmic-ray flux decrease, which is known as a Forbush decrease. The traditional model and understanding of the Forbush decrease suggest that the sub-structures of an interplanetary counterpart of a coronal mass ejection (ICME) independently contribute to the cosmic-ray flux decrease. These sub-structures, the shock-sheath and the magnetic cloud (MC), manifest as a classical two-step Forbush decrease. Recent work by Raghav et al. has shown multi-step decreases and recoveries within the shock-sheath. However, this cannot be explained by the ideal shock-sheath barrier model. Furthermore, they suggested that local structures within the ICME's sub-structures (MC and shock-sheath) could explain this deviation of the FD profile from the classical FD. Therefore, the present study attempts to investigate in detail the cause of the multi-step cosmic-ray flux decreases and respective recoveries within the shock-sheath. A 3D-hodogram method is utilized to obtain more details regarding the local structures within the shock-sheath. This method unambiguously suggests the formation of small-scale local structures within the ICME (in the shock-sheath and even in the MC). Moreover, the method can differentiate the turbulent and ordered interplanetary magnetic field (IMF) regions within the sub-structures of the ICME. The study explicitly suggests that the turbulent and ordered IMF regions within the shock-sheath influence cosmic-ray variations differently.

  2. Data Assimilation by delay-coordinate nudging

    NASA Astrophysics Data System (ADS)

    Pazo, Diego; Lopez, Juan Manuel; Carrassi, Alberto

    2016-04-01

    A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an un-optimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning, and the reduced computational cost of standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
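    Standard nudging, the baseline the new method improves on, amounts to adding a relaxation term toward the observations at every model step. The twin experiment below is a minimal sketch on Lorenz-63 (Euler stepping, a hypothetical gain and noise level, observations of x only); the delay-coordinate variant would additionally include past observations in the forcing term.

    ```python
    # Nudging twin experiment: a "truth" Lorenz-63 run generates noisy
    # observations of x; a model run started from the wrong state is relaxed
    # toward them each step, while a free run receives no forcing.
    import random

    def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    random.seed(0)
    dt, nsteps, gain = 0.005, 4000, 50.0       # gain is illustrative
    truth = (1.0, 1.0, 1.0)
    nudged = (5.0, -5.0, 20.0)                 # wrong initial condition
    free = nudged
    for _ in range(nsteps):
        truth = lorenz_step(truth, dt)
        obs_x = truth[0] + random.gauss(0.0, 0.1)  # noisy observation of x
        free = lorenz_step(free, dt)
        x, y, z = lorenz_step(nudged, dt)
        nudged = (x + dt * gain * (obs_x - x), y, z)  # relax x toward obs

    err_nudged = sum((a - b) ** 2 for a, b in zip(nudged, truth)) ** 0.5
    err_free = sum((a - b) ** 2 for a, b in zip(free, truth)) ** 0.5
    print(err_nudged, err_free)
    ```

    The nudged run synchronizes with the truth down to roughly the observation-noise level, while the free run diverges chaotically; replacing the instantaneous forcing with one built from present and past observations is the delay-coordinate extension.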

  3. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    PubMed

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

    In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the requirement for the system dynamics, or an identifier, is relaxed in the proposed method. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. Explicit updating rules for these three neural networks are provided based on the data generated during online learning along the system trajectories. The stability analysis, in terms of the neural network approximation errors, is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.

  4. Aerodynamics of Engine-Airframe Interaction

    NASA Technical Reports Server (NTRS)

    Caughey, D. A.

    1986-01-01

    The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focuses on the development of solution-adaptive grid procedures for these problems, and on the development of multigrid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated on two-dimensional problems in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for the solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.

  5. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
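    The role of a flux limiter in restoring positivity can be shown in one dimension. The sketch below uses a standard second-order MUSCL reconstruction with a minmod limiter, a generic construction rather than UTOPIA or the author's universal limiter; grid size and Courant number are illustrative.

    ```python
    # Flux-limited finite-volume advection of a step profile at constant
    # positive velocity with periodic boundaries. The minmod-limited slope
    # keeps the scheme TVD for Courant numbers c <= 1, so the advected
    # profile develops no new extrema (positivity is preserved), unlike
    # the unlimited second-order scheme.

    def minmod(a, b):
        if a * b <= 0.0:
            return 0.0
        return a if abs(a) < abs(b) else b

    def advect(u, c, nsteps):
        """Advect with Courant number c; u is the list of cell averages."""
        n = len(u)
        for _ in range(nsteps):
            # limited slope in each cell (minmod of one-sided differences)
            s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
                 for i in range(n)]
            # second-order upwind face value at the right face of cell i
            flux = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
            # conservative update; flux[i-1] wraps periodically for i = 0
            u = [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]
        return u

    u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(50)]
    u = advect(u0, c=0.5, nsteps=100)
    print(min(u), sum(u))   # no undershoot; total mass conserved
    ```

    The conservative flux-difference form guarantees exact mass conservation (the fluxes telescope over the periodic domain), while the limiter clips the reconstructed slopes exactly where an unlimited polynomial would overshoot, which is the one-dimensional core of the multidimensional problem the report addresses.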

  6. Thermal-Acoustic Analysis of a Metallic Integrated Thermal Protection System Structure

    NASA Technical Reports Server (NTRS)

    Behnke, Marlana N.; Sharma, Anurag; Przekop, Adam; Rizzi, Stephen A.

    2010-01-01

    A study is undertaken to investigate the response of a representative integrated thermal protection system structure under combined thermal, aerodynamic pressure, and acoustic loadings. A two-step procedure is offered and consists of a heat transfer analysis followed by a nonlinear dynamic analysis under a combined loading environment. Both analyses are carried out in physical degrees-of-freedom using implicit and explicit solution techniques available in the Abaqus commercial finite-element code. The initial study is conducted on a reduced-size structure to keep the computational effort contained while validating the procedure and exploring the effects of individual loadings. An analysis of a full size integrated thermal protection system structure, which is of ultimate interest, is subsequently presented. The procedure is demonstrated to be a viable approach for analysis of spacecraft and hypersonic vehicle structures under a typical mission cycle with combined loadings characterized by largely different time-scales.

  7. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove the correction factors or film-cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered: the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution scheme is employed, and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with the stability of the shape-change algorithm caused by explicit time step limits are also discussed.
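    Successive substitution is plain fixed-point iteration, usually stabilized with under-relaxation when the two coupled solvers exchange boundary values. The sketch below shows the pattern on a toy radiative energy balance; the heating law, relaxation factor, and tolerances are all illustrative, not the paper's ablation model.

    ```python
    # Successive substitution with under-relaxation: the new iterate is a
    # blend of the previous value and the value returned by the coupled
    # model, x <- (1 - relax) * x + relax * f(x), iterated to convergence.

    def solve_fixed_point(f, x0, relax=0.5, tol=1e-10, max_iter=200):
        x = x0
        for i in range(max_iter):
            x_new = (1.0 - relax) * x + relax * f(x)
            if abs(x_new - x) < tol:
                return x_new, i + 1
            x = x_new
        raise RuntimeError("successive substitution did not converge")

    # Toy surface energy balance written as a fixed-point map T = f(T):
    # radiative equilibrium sigma*T^4 = q_in(T), with a heating rate that
    # drops mildly as the surface temperature rises (purely illustrative).
    SIGMA = 5.670e-8

    def f(T):
        q_in = 5.0e5 * (1.0 - 1.0e-4 * T)
        return (q_in / SIGMA) ** 0.25

    T, iters = solve_fixed_point(f, x0=300.0)
    print(T, iters)
    ```

    Varying the relaxation factor with the state, as the paper varies the order of succession with surface temperature, is the usual cure when the bare map f is not a contraction over part of the trajectory.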

  8. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  9. Doctors and managers: poor relationships may be damaging patients—what can be done?

    PubMed Central

    Edwards, N

    2003-01-01

    The problem of poor relationships between doctors and managers is a common feature of many healthcare systems. This problem needs to be explicitly addressed and there are a number of positive steps that could be taken. Firstly, there would be value in working to improve the quality of relationships and better mutual understanding of the necessarily different positions of doctors and managers. Finding a common approach to managing resources, accountability, autonomy, and the creation of more systematic ways of working seems to be important. The use of costed clinical pathways may be one approach. Rather than seeing guidelines and accountability systems as a threat to autonomy there is an argument that they are an essential adjunct to it. Redefining autonomy in order to preserve it and to ensure that it encompasses accountability and responsibility will be an important step. A key step is the development of clinical leadership. PMID:14645744

  10. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited by the spatial sampling step needed to calculate convolution integrals of the Fresnel type. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is dictated by the precision of the convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.

  11. Modeling Bloch oscillations in nanoscale Josephson junctions.

    PubMed

    Vora, Heli; Kautz, R L; Nam, S W; Aumentado, J

    2017-08-01

    Bloch oscillations in nanoscale Josephson junctions with a Coulomb charging energy comparable to the Josephson coupling energy are explored within the context of a model previously considered by Geigenmüller and Schön that includes Zener tunneling and treats quasiparticle tunneling as an explicit shot-noise process. The dynamics of the junction quasicharge are investigated numerically using both Monte Carlo and ensemble approaches to calculate voltage-current characteristics in the presence of microwaves. We examine in detail the origin of harmonic and subharmonic Bloch steps at dc biases I = (n/m)2ef induced by microwaves of frequency f and consider the optimum parameters for the observation of harmonic (m = 1) steps. We also demonstrate that the GS model allows a detailed semiquantitative fit to experimental voltage-current characteristics previously obtained at the Chalmers University of Technology, confirming and strengthening the interpretation of the observed microwave-induced steps in terms of Bloch oscillations.
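The quoted bias condition I = (n/m)2ef is simple enough to evaluate directly; the 5 GHz drive frequency below is an arbitrary example, not a value from the experiments:

```python
E_CHARGE = 1.602176634e-19   # elementary charge in coulombs (exact SI value)

def bloch_step_current(f, n=1, m=1):
    """dc bias of the (n/m) Bloch step for microwave frequency f in Hz."""
    return (n / m) * 2.0 * E_CHARGE * f

# a (hypothetical) 5 GHz drive puts the fundamental (m = 1) step near
# 1.6 nA, with the first subharmonic (m = 2) step at half that bias
i_fund = bloch_step_current(5.0e9)
```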

  12. Modeling Bloch oscillations in nanoscale Josephson junctions

    PubMed Central

    Vora, Heli; Kautz, R. L.; Nam, S. W.; Aumentado, J.

    2018-01-01

    Bloch oscillations in nanoscale Josephson junctions with a Coulomb charging energy comparable to the Josephson coupling energy are explored within the context of a model previously considered by Geigenmüller and Schön that includes Zener tunneling and treats quasiparticle tunneling as an explicit shot-noise process. The dynamics of the junction quasicharge are investigated numerically using both Monte Carlo and ensemble approaches to calculate voltage-current characteristics in the presence of microwaves. We examine in detail the origin of harmonic and subharmonic Bloch steps at dc biases I = (n/m)2ef induced by microwaves of frequency f and consider the optimum parameters for the observation of harmonic (m = 1) steps. We also demonstrate that the GS model allows a detailed semiquantitative fit to experimental voltage-current characteristics previously obtained at the Chalmers University of Technology, confirming and strengthening the interpretation of the observed microwave-induced steps in terms of Bloch oscillations. PMID:29577106

  13. Simulations of precipitation using the Community Earth System Model (CESM): Sensitivity to microphysics time step

    NASA Astrophysics Data System (ADS)

    Murthi, A.; Menon, S.; Sednev, I.

    2011-12-01

    An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case as large as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation than those of the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and surface in the global means.
In order to gain some insight into the possible causes of the observed differences, future work would involve performing additional sensitivity tests using the single column model version of CAM 5.1 to gauge the effect of τ on calculations of source terms and mixing ratios used to calculate precipitation in the budget equations.
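The positivity and stability issue that motivates the choice of τ can be illustrated with a single forward-Euler sink term (the process time scale and mixing ratio below are toy numbers, not CAM's microphysics):

```python
def substep_sink(q0, dt_model=1800.0, tau=300.0, t_proc=500.0):
    """Advance one model step dt_model by sub-cycling a fast sink
    dq/dt = -q/t_proc with forward-Euler sub-steps of length tau."""
    q = q0
    for _ in range(int(round(dt_model / tau))):
        q = q - tau * q / t_proc      # amplification factor (1 - tau/t_proc)
    return q

q0 = 1.0e-3                               # an arbitrary mixing ratio, kg/kg
q_fine = substep_sink(q0, tau=300.0)      # factor 0.4 per sub-step: stays positive
q_unsplit = substep_sink(q0, tau=1800.0)  # factor 1 - 3.6 = -2.6: sign flips
```

Once τ exceeds the fastest process time scale, the explicit update can drive mixing ratios negative in a single step, which is exactly the failure mode the sub-stepping is meant to avoid.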

  14. Explicit reference governor for linear systems

    NASA Astrophysics Data System (ADS)

    Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo

    2018-06-01

    The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set that is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.

  15. Changes of Explicit and Implicit Stigma in Medical Students during Psychiatric Clerkship.

    PubMed

    Wang, Peng-Wei; Ko, Chih-Hung; Chen, Cheng-Sheng; Yang, Yi-Hsin Connine; Lin, Huang-Chi; Cheng, Cheng-Chung; Tsang, Hin-Yeung; Wu, Ching-Kuan; Yen, Cheng-Fang

    2016-04-01

    This study examines the differences in explicit and implicit stigma between medical and non-medical undergraduate students at baseline; the changes in explicit and implicit stigma in medical and non-medical undergraduate students after a 1-month psychiatric clerkship and a 1-month follow-up period; and the differences in those changes between medical and non-medical undergraduate students. Seventy-two medical undergraduate students and 64 non-medical undergraduate students were enrolled. All participants were interviewed at intake and after 1 month. The Taiwanese version of the Stigma Assessment Scale and the Implicit Association Test were used to measure the participants' explicit and implicit stigma. Neither explicit nor implicit stigma differed between the two groups at baseline. The medical, but not the non-medical, undergraduate students had a significant decrease in explicit stigma during the 1-month follow-up period. However, neither the medical nor the non-medical undergraduate students exhibited a significant change in implicit stigma during the follow-up. There was an interactive effect between group and time on explicit stigma but not on implicit stigma. Explicit but not implicit stigma toward mental illness decreased in the medical undergraduate students after a psychiatric clerkship. Further study is needed to examine how to reduce implicit stigma toward mental illness.

  16. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    NASA Astrophysics Data System (ADS)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
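Steps (1) and (2) of the calibration chain can be sketched as follows; the profile form q(z) = q0 * exp(-z/zq) matches the exponential fit described above, while the trap heights, sensor height, and count rate in the test usage are hypothetical, not field values:

```python
import math

def fit_exponential_profile(heights, fluxes):
    """Step (1): fit q(z) = q0 * exp(-z / zq) to low-frequency trap
    data via least squares on the linearized form ln q = ln q0 - z / zq."""
    ys = [math.log(q) for q in fluxes]
    n = len(heights)
    zbar = sum(heights) / n
    ybar = sum(ys) / n
    slope = (sum((z - zbar) * (y - ybar) for z, y in zip(heights, ys))
             / sum((z - zbar) ** 2 for z in heights))
    return math.exp(ybar - slope * zbar), -1.0 / slope   # (q0, zq)

def calibration_factor(q0, zq, z_sensor, hf_count_rate):
    """Step (2): ratio of the LF-fit flux at the sensor height to the
    concurrent HF particle count rate over the same interval."""
    return q0 * math.exp(-z_sensor / zq) / hf_count_rate
```

Steps (3) and (4) then simply multiply the HF count series by these factors and aggregate the calibrated height-specific fluxes into a total flux.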

  17. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are treated explicitly at each time sub-step. To achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The numerical tests performed demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
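The payoff of the direction-splitting idea is that each one-dimensional implicit sub-step reduces to a tridiagonal solve. A sketch, using backward Euler rather than the paper's Crank-Nicolson two-stage scheme and hypothetical homogeneous Dirichlet boundaries:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal
    (a[0] and c[-1] are ignored)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(u, dt, dx, nu=1.0):
    """One implicit (backward Euler) sub-step in a single direction:
    (I - nu*dt*D_xx) u_new = u_old, homogeneous Dirichlet ends."""
    r = nu * dt / dx ** 2
    n = len(u)
    return thomas([-r] * n, [1.0 + 2.0 * r] * n, [-r] * n, list(u))
```

Sweeping such solves direction by direction is what replaces the global Poisson solve, and each line solve parallelizes naturally across the other directions.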

  18. Classical space-times from the S-matrix

    NASA Astrophysics Data System (ADS)

    Neill, Duff; Rothstein, Ira Z.

    2013-12-01

    We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin-two particle. As an explicit example we derive the Schwarzschild space-time as a series in G_N. At no point of the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements, which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation, where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other stationary space-times, such as Kerr, follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three-point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint, our methodology can also be utilized to calculate quantities relevant for the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.

  19. Two dimensional numerical prediction of deflagration-to-detonation transition in porous energetic materials.

    PubMed

    Narin, B; Ozyörük, Y; Ulas, A

    2014-05-30

    This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is constructed as a full two-phase model, based on a highly coupled system of partial differential equations involving the basic flow conservation equations and constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment central differencing and prevent excessive dispersion. The source terms describing particle-gas momentum and energy transfers make the equation system quite stiff, and hence its explicit integration difficult. To ease these difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the source terms is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects, including pore collapse, sublimation, and pyrolysis, are not taken into account for ignition and growth, and a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and its results agree well with those of commercial software.
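The time-split treatment of stiff interphase sources can be caricatured on two scalar "phases": the nonstiff part is advanced explicitly, and the stiff exchange term is integrated exactly over the same step, so the exchange rate no longer limits the step size. The linear relaxation below is an assumption for illustration, not the paper's constitutive model:

```python
import math

def split_step(u_gas, u_solid, dt, advect_rate=1.0, k_exchange=500.0):
    """One time-split step: an explicit update of the nonstiff
    (transport-like) part, then exact integration of a stiff linear
    relaxation that drives the two phases together."""
    # part 1: explicit, stability limited only by the nonstiff rate
    u_gas = u_gas - dt * advect_rate * u_gas
    # part 2: stiff exchange du_g/dt = -k (u_g - u_s) and vice versa;
    # the difference decays as exp(-2 k dt) while the mean is conserved
    mean = 0.5 * (u_gas + u_solid)
    decay = math.exp(-2.0 * k_exchange * dt)
    return mean + (u_gas - mean) * decay, mean + (u_solid - mean) * decay
```

Even with k_exchange * dt far above the explicit stability limit, the split step remains bounded, which is the point of the approach.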

  20. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
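Up to conventions, the binomial code words are finite Fock superpositions weighted by square roots of binomial coefficients, with support on every (S+1)-th level; a sketch for small N and S (see the paper for the general construction and its error-correction conditions):

```python
from math import comb, sqrt

def binomial_code_word(parity, N=2, S=1):
    """Fock-space amplitudes {level: amplitude} of one binomial code
    word: levels p*(S+1) for p of the given parity, with weights
    sqrt(C(N+1, p) / 2**N)."""
    return {p * (S + 1): sqrt(comb(N + 1, p) / 2 ** N)
            for p in range(N + 2) if p % 2 == parity}
```

Because the binomial coefficients of fixed parity sum to 2**N, both words are exactly normalized, and their disjoint Fock support makes them exactly orthogonal, one of the advantages over cat codes noted above.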

  1. Assessing the Role of Climate Variability on Liver Fluke Risk in the UK Through Mechanistic Hydro-Epidemiological Modelling

    NASA Astrophysics Data System (ADS)

    Beltrame, L.; Dunne, T.; Rose, H.; Walker, J.; Morgan, E.; Vickerman, P.; Wagener, T.

    2016-12-01

    Liver fluke is a flatworm parasite infecting grazing animals worldwide. In the UK, it causes considerable production losses to the cattle and sheep industries and costs farmers millions of pounds each year due to reduced growth rates and lower milk yields. A large part of the parasite life cycle takes place outside the host, with its survival and development strongly controlled by climatic and hydrologic conditions. Evidence of climate-driven changes in the distribution and seasonality of fluke disease already exists, as the infection is increasingly expanding to new areas and becoming a year-round problem. It is therefore crucial to assess current and potential future impacts of climate variability on the disease to guide interventions at the farm scale and mitigate risk. Climate-based fluke risk models have been available since the 1950s; however, they are based on empirical relationships derived from historical climate and incidence data, and thus are unlikely to be robust for simulating risk under changing conditions. Moreover, they are not dynamic but estimate risk over large regions of the UK from monthly average climate conditions, so they do not allow investigation of the effects of climate variability in support of farmers' decisions. In this study, we introduce a mechanistic model for fluke, which represents habitat suitability for disease development at 25 m resolution with a daily time step, explicitly linking the parasite life cycle to key hydro-climatic conditions. The model is applied to a case study in the UK, and sensitivity analysis is performed to better understand the role of climate variability in the space-time dynamics of the disease, while explicitly accounting for uncertainties. Comparisons are presented with expert knowledge and a widely used empirical model.

  2. Subliminal mere exposure and explicit and implicit positive affective responses.

    PubMed

    Hicks, Joshua A; King, Laura A

    2011-06-01

    Research suggests that repeated subliminal exposure to environmental stimuli enhances positive affective responses. To date, this research has primarily concentrated on the effects of repeated exposure on explicit measures of positive affect (PA). However, recent research suggests that repeated subliminal presentations may increase implicit PA as well. The present study tested this hypothesis. Participants were either subliminally primed with repeated presentations of the same stimuli or only exposed to each stimulus one time. Results confirmed predictions showing that repeated exposure to the same stimuli increased both explicit and implicit PA. Implications for the role of explicit and implicit PA in attitudinal judgements are discussed.

  3. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.

  4. The Effects of the Timing of Isolated FFI on the Explicit Knowledge and Written Accuracy of Learners with Different Prior Knowledge of the Linguistic Target

    ERIC Educational Resources Information Center

    Shintani, Natsuko

    2017-01-01

    This study examines the effects of the timing of explicit instruction (EI) on grammatical accuracy. A total of 123 learners were divided into two groups: those with some productive knowledge of past-counterfactual conditionals (+Prior Knowledge) and those without such knowledge (-Prior Knowledge). Each group was divided into four conditions. Two…

  5. Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2006-09-01

    tanks is presented. The semi-discrete combined solid and fluid equations of motions are integrated using a time- accurate parallel explicit solver...Incompressible fluid flow in a moving/deforming container including accurate modeling of the free-surface, turbulence, and viscous effects ...paper, a single computational code which uses a time- accurate explicit solution procedure is used to solve both the solid and fluid equations of

  6. A simple inertial formulation of the shallow water equations for efficient two-dimensional flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Bates, Paul D.; Horritt, Matthew S.; Fewtrell, Timothy J.

    2010-06-01

    This paper describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models where flows in the x and y Cartesian directions are decoupled. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to simulation results from the diffusive storage cell code of Hunter et al. (2005). For the most complex test, involving fine-spatial-resolution simulation of flow in a topographically complex urban area, the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes, where the stable time step scales with (1/Δx)², the new equation set represents shallow water wave propagation, so stability is controlled by the Courant-Friedrichs-Lewy condition and the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models, with commensurate reductions in model run times. For the tests reported in this paper the maximum speed-up achieved over a diffusive storage cell model was 1120×, although the actual value will depend on model resolution and water surface gradient. Solutions using the new equation set are shown to be grid-independent for the conditions considered and to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors in predicted depth were noted when Manning's n = 0.01.
The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
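The two time-step scalings quoted above can be checked with back-of-envelope arithmetic (the grid size, water depth, and diffusive proportionality constant below are placeholders, not values from the tests):

```python
import math

def dt_diffusive(dx, c_diff=0.01):
    """Stable step of a diffusive storage-cell scheme: dt ~ dx**2
    (c_diff is a placeholder proportionality constant)."""
    return c_diff * dx ** 2

def dt_inertial(dx, depth=2.0, g=9.81):
    """CFL-limited step of the inertial formulation: dt ~ dx / sqrt(g h)."""
    return dx / math.sqrt(g * depth)

# the scalings themselves, independent of the placeholder constants:
# halving dx halves dt_inertial but quarters dt_diffusive
speedup = dt_inertial(2.0) / dt_diffusive(2.0)
```

As the grid is refined, the gap between the two stable steps widens linearly in 1/Δx, which is why the speed-up is largest for fine urban grids.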

  7. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulations Guided by a Coarse-Grained Model.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2015-08-11

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to exploring and sampling the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strengths of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. 
Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water.
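The six-step loop above can be caricatured on a one-dimensional double well; here the CG model, the neMD drive, and the proposal are all stand-ins, and the toy proposal is only approximately symmetric (the momentum-reversal prescription described above is what makes the real algorithm rigorously satisfy detailed balance):

```python
import math, random

def energy(x):
    """Toy FG potential: a double well with minima at x = +/-1."""
    return (x * x - 1.0) ** 2

def hybrid_nemd_mc(n_cycles=2000, kT=0.5, seed=1):
    rng = random.Random(seed)
    x, samples = 1.0, []
    for _ in range(n_cycles):
        # steps 1-3: propagate FG, map to CG, evolve CG (caricatured
        # here as a coarsened, perturbed copy of the coordinate)
        target = round(x) + 0.5 * rng.gauss(0.0, 1.0)
        # steps 4-5: drive the FG state toward the CG target (a crude
        # stand-in for the neMD trajectory)
        x_new = x + 0.5 * (target - x) + 0.1 * rng.gauss(0.0, 1.0)
        # step 6: Metropolis accept/reject, then return to step 1
        if rng.random() < math.exp(min(0.0, -(energy(x_new) - energy(x)) / kT)):
            x = x_new
        samples.append(x)
    return samples
```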

  8. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics—Monte Carlo Simulations Guided by a Coarse-Grained Model

    PubMed Central

    2015-01-01

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to exploring and sampling the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strengths of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. 
Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water. PMID:26574442

  9. Alcohol-Approach Inclinations and Drinking Identity as Predictors of Behavioral Economic Demand for Alcohol

    PubMed Central

    Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.

    2016-01-01

    Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444

  10. Modeling disease transmission near eradication: An equation free approach

    NASA Astrophysics Data System (ADS)

    Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan

    2015-01-01

    Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
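
    The coarse projective integration idea can be sketched on a stochastic SIS model (a stand-in for the paper's polio compartmental model; the `sis_burst`/`projective_step` helpers and all parameter values are hypothetical): lift the coarse variable to an ensemble, run a short burst of micro steps, estimate the coarse time derivative from the last two coarse states, and leap forward deterministically.

```python
import random

def sis_burst(frac_infected, n, beta, gamma, dt, n_steps, n_ensemble, rng):
    """Short ensemble burst of a stochastic SIS model; returns the coarse
    trajectory (ensemble-mean infected fraction after each micro step)."""
    ensemble = [int(round(frac_infected * n))] * n_ensemble
    coarse = []
    for _ in range(n_steps):
        updated = []
        for infected in ensemble:
            susceptible = n - infected
            p_inf = min(1.0, beta * infected / n * dt)
            new_inf = sum(1 for _ in range(susceptible) if rng.random() < p_inf)
            new_rec = sum(1 for _ in range(infected) if rng.random() < gamma * dt)
            updated.append(max(0, min(n, infected + new_inf - new_rec)))
        ensemble = updated
        coarse.append(sum(ensemble) / (n_ensemble * n))
    return coarse

def projective_step(frac, rng, n=200, beta=1.5, gamma=1.0, dt=0.05,
                    burst_steps=5, leap_steps=10, n_ensemble=30):
    """One coarse projective step: short burst, slope estimate, forward leap.
    The leap skips `leap_steps` micro steps of direct simulation."""
    traj = sis_burst(frac, n, beta, gamma, dt, burst_steps, n_ensemble, rng)
    slope = (traj[-1] - traj[-2]) / dt   # approximate coarse time derivative
    return max(0.0, min(1.0, traj[-1] + leap_steps * dt * slope))

rng = random.Random(0)
frac = 0.2
for _ in range(20):
    frac = projective_step(frac, rng)
```

    The speedup comes from replacing `leap_steps` micro steps of the full ensemble with a single deterministic extrapolation of the coarse variable.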

  11. Forced in-plane vibration of a thick ring on a unilateral elastic foundation

    NASA Astrophysics Data System (ADS)

    Wang, Chunjian; Ayalew, Beshah; Rhyne, Timothy; Cron, Steve; Dailliez, Benoit

    2016-10-01

    Most existing studies of a deformable ring on an elastic foundation rely on the assumption of a linear foundation. This assumption is insufficient in cases where the foundation may have a unilateral stiffness that vanishes in compression or tension, such as in non-pneumatic tires and bushing bearings. This paper analyzes the in-plane dynamics of such a thick ring on a unilateral elastic foundation, specifically, a two-parameter unilateral elastic foundation, where the stiffness of the foundation is treated as linear in the circumferential direction but unilateral (i.e. collapsible or tensionless) in the radial direction. The thick ring is modeled as an orthotropic and extensible circular Timoshenko beam. An arbitrarily distributed time-varying in-plane force is considered as the excitation. The equations of motion are explicitly derived, and a solution method is proposed that uses an implicit Newmark scheme for the time domain solution and an iterative compensation approach to determine the unilateral zone of the foundation at each time step. The dynamic axle force transmission is also analyzed. Illustrative forced vibration responses obtained from the proposed model and solution method are compared with those obtained from a finite element model.
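
    The time stepping strategy described above can be sketched on a single-degree-of-freedom surrogate (a mass on one tensionless spring; the `newmark_unilateral` helper and all parameters are hypothetical, and the paper's ring model is far richer): within each implicit Newmark step, the solve is repeated until the assumed contact state of the unilateral foundation stops changing.

```python
import math

def newmark_unilateral(m, k, f, u0, v0, dt, n_steps,
                       beta=0.25, gamma=0.5, max_iter=50):
    """Implicit Newmark (average acceleration) stepping for
    m*u'' + k(u)*u = f(t), where the foundation stiffness is unilateral:
    active only when u > 0. A hypothetical single-DOF reduction of the
    ring-on-foundation problem, not the paper's model."""
    u, v = u0, v0
    a = (f(0.0) - (k if u > 0 else 0.0) * u) / m
    history = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        active = u > 0                       # initial guess for the contact state
        for _ in range(max_iter):
            k_eff = k if active else 0.0
            # Newmark effective stiffness and predictor right-hand side
            K = k_eff + m / (beta * dt * dt)
            rhs = f(t) + m * (u / (beta * dt * dt) + v / (beta * dt)
                              + (0.5 / beta - 1.0) * a)
            u_new = rhs / K
            new_active = u_new > 0
            if new_active == active:
                break
            active = new_active              # compensation: flip state, re-solve
        a_new = (u_new - u) / (beta * dt * dt) - v / (beta * dt) \
            - (0.5 / beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        history.append(u)
    return history

# free vibration released from u0 = 1: while u > 0 the spring stays active
# and the response should follow u = cos(t)
hist = newmark_unilateral(m=1.0, k=1.0, f=lambda t: 0.0,
                          u0=1.0, v0=0.0, dt=0.01, n_steps=100)
```

    While the displacement remains positive the foundation never detaches, so the sketch reduces to a standard linear oscillator and can be checked against the analytical solution.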

  12. Forced-Unfolding and Force-Quench Refolding of RNA Hairpins

    PubMed Central

    Hyeon, Changbong; Thirumalai, D.

    2006-01-01

    Nanomanipulation of individual RNA molecules, using laser optical tweezers, has made it possible to infer the major features of their energy landscape. Time-dependent mechanical unfolding trajectories, measured at a constant stretching force (fS), of simple RNA structures (hairpins and three-helix junctions) sandwiched between RNA/DNA hybrid handles show that they unfold in a reversible all-or-none manner. To provide a molecular interpretation of the experiments, we use a general coarse-grained off-lattice Gō-like model, in which each nucleotide is represented using three interaction sites. Using the coarse-grained model we have explored forced unfolding of an RNA hairpin as a function of fS and the loading rate (rf). The simulations and theoretical analysis have been done both with and without the handles, which are explicitly modeled by semiflexible polymer chains. The mechanisms and timescales for denaturation by temperature jump and mechanical unfolding are vastly different. The directed perturbation of the native state by fS results in a sequential unfolding of the hairpin starting from its ends, whereas thermal denaturation occurs stochastically. From the dependence of the unfolding rates on rf and fS, we show that the position of the unfolding transition state is not a constant but moves dramatically as either rf or fS is changed. The transition-state movements are interpreted by adopting the Hammond postulate for forced unfolding. Forced-unfolding simulations of RNA, with handles attached to the two ends, show that the value of the unfolding force increases (especially at high pulling speeds) as the length of the handles increases. The pathways for refolding of RNA from a stretched initial conformation, upon quenching fS to the quench force fQ, are highly heterogeneous. The refolding times, upon force-quench, are at least an order of magnitude greater than those obtained by temperature quench. The long fQ-dependent refolding times starting from fully stretched states are analyzed using a model that accounts for the microscopic steps in the rate-limiting step, which involves the trans to gauche transitions of the dihedral angles in the GAAA tetraloop. The simulations with an explicit molecular model for the handles show that the dynamics of force-quench refolding depends strongly on the interplay between the handles' contour and persistence lengths and the persistence length of the RNA. Using the generality of our results, we also make a number of precise, experimentally testable predictions. PMID:16473903
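
    The dependence of unfolding on fS and rf described above is often summarized, to leading order, by the phenomenological Bell model (a textbook approximation, not the model used in this paper; the parameter values below are invented for illustration):

```python
import math

def bell_unfolding_rate(f, k0, dx, kBT=4.1):
    """Bell-model unfolding rate k(f) = k0 * exp(f*dx/kBT).
    Units: f in pN, dx (transition-state distance) in nm, k0 in 1/s,
    kBT ~ 4.1 pN*nm at room temperature. Illustrative only."""
    return k0 * math.exp(f * dx / kBT)

def most_probable_unfolding_force(rf, k0, dx, kBT=4.1):
    """Peak of the unfolding-force distribution at constant loading rate rf
    (pN/s): f* = (kBT/dx) * ln(rf*dx/(k0*kBT)), valid when the log is
    positive. Note the fixed dx: the transition-state movement reported in
    the abstract is exactly what this simplest form ignores."""
    return (kBT / dx) * math.log(rf * dx / (k0 * kBT))
```

    The logarithmic growth of f* with rf is the baseline against which the transition-state movements inferred in the paper are usually discussed.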

  13. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  14. Requirements for the formal representation of pathophysiology mechanisms by clinicians

    PubMed Central

    Helvensteijn, M.; Kokash, N.; Martorelli, I.; Sarwar, D.; Islam, S.; Grenon, P.; Hunter, P.

    2016-01-01

    Knowledge of multiscale mechanisms in pathophysiology is the bedrock of clinical practice. If quantitative methods, predicting patient-specific behaviour of these pathophysiology mechanisms, are to be brought to bear on clinical decision-making, the Human Physiome community and Clinical community must share a common computational blueprint for pathophysiology mechanisms. A number of obstacles stand in the way of this sharing—not least the technical and operational challenges that must be overcome to ensure that (i) the explicit biological meanings of the Physiome's quantitative methods to represent mechanisms are open to articulation, verification and study by clinicians, and that (ii) clinicians are given the tools and training to explicitly express disease manifestations in direct contribution to modelling. To this end, the Physiome and Clinical communities must co-develop a common computational toolkit, based on this blueprint, to bridge the representation of knowledge of pathophysiology mechanisms (a) that is implicitly depicted in electronic health records and the literature, with (b) that found in mathematical models explicitly describing mechanisms. In particular, this paper makes use of a step-wise description of a specific disease mechanism as a means to elicit the requirements of representing pathophysiological meaning explicitly. The computational blueprint developed from these requirements addresses the Clinical community goals to (i) organize and manage healthcare resources in terms of relevant disease-related knowledge of mechanisms and (ii) train the next generation of physicians in the application of quantitative methods relevant to their research and practice. PMID:27051514

  15. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
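
    The spatial-parallelism pattern described above can be sketched serially, with explicit send buffers standing in for message passing (the function and all parameters are hypothetical, not MONACO code): particles that cross a subdomain boundary during a time step are routed to the owning rank's buffer and drained in a receive phase.

```python
import random

def monte_carlo_step(domains, bounds, rng, step=0.1):
    """One Monte Carlo transport time step over spatially decomposed
    subdomains. `domains[i]` holds particle positions owned by rank i;
    `bounds[i]` is that rank's (lo, hi) interval. A 1-D serial sketch of
    the spatial-parallelism pattern; the random walk is a surrogate for a
    transport flight."""
    glo, ghi = bounds[0][0], bounds[-1][1]
    send = [[] for _ in domains]                   # send[j]: particles bound for rank j
    for i, particles in enumerate(domains):
        lo, hi = bounds[i]
        kept = []
        for x in particles:
            x += rng.uniform(-step, step)          # surrogate transport flight
            x = min(max(x, glo), ghi - 1e-12)      # stay inside the global domain
            if lo <= x < hi:
                kept.append(x)
            else:
                # destination rank is data-dependent: the communication
                # pattern is non-deterministic, as noted in the abstract
                dest = next(j for j, (l, h) in enumerate(bounds) if l <= x < h)
                send[dest].append(x)
        domains[i] = kept
    for j, incoming in enumerate(send):            # "receive": drain the buffers
        domains[j].extend(incoming)
    return domains

rng = random.Random(1)
bounds = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
domains = [[rng.uniform(l, h) for _ in range(50)] for l, h in bounds]
total = sum(len(d) for d in domains)
for _ in range(20):
    domains = monte_carlo_step(domains, bounds, rng)
```

    Particle count is conserved across the exchange, and after each step every particle resides in the subdomain that owns its position.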

  16. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
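
    The implicit stepping loop described above (Crank-Nicolson residual, explicitly computed Jacobian, direct inner solve) can be sketched on a scalar stiff surrogate y' = -y^3, with a hand-coded derivative standing in for the symbolic code generator (everything here is a hypothetical illustration, not the MRC equations):

```python
import math

def f(y):
    """Model right-hand side y' = -y**3 (a stiff scalar surrogate)."""
    return -y ** 3

def df(y):
    """Hand-coded Jacobian df/dy, playing the role of the symbolically
    generated Jacobian in the actual code."""
    return -3.0 * y ** 2

def crank_nicolson_step(y, dt, tol=1e-12, max_iter=30):
    """One Crank-Nicolson step for y' = f(y): solve the implicit residual
    g(z) = z - y - dt/2 * (f(y) + f(z)) = 0 by Newton's method; the scalar
    division is the analogue of the direct linear solve."""
    z = y + dt * f(y)                  # explicit Euler predictor
    for _ in range(max_iter):
        g = z - y - 0.5 * dt * (f(y) + f(z))
        dg = 1.0 - 0.5 * dt * df(z)    # residual Jacobian
        z_new = z - g / dg             # direct "solve"
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# integrate to t = 10; the exact solution with y(0) = 1 is y(t) = (1+2t)**-0.5
y, dt = 1.0, 0.1
for _ in range(100):
    y = crank_nicolson_step(y, dt)
```

    The point of the implicit scheme is that the step size is chosen by the slow dynamics of interest, not by a stability (CFL-type) restriction.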

  17. Discontinuous functional for linear-response time-dependent density-functional theory: The exact-exchange kernel and approximate forms

    NASA Astrophysics Data System (ADS)

    Hellgren, Maria; Gross, E. K. U.

    2013-11-01

    We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.

  18. Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Liou, M. S.; Povinelli, L. A.

    1993-01-01

    A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping, and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown are a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
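
    The convergence-acceleration recipe above (multi-stage Runge-Kutta with a local, per-cell time step) can be sketched on a toy cell-wise residual with widely varying "wave speeds" (the function names and parameters are hypothetical, not the solver described here):

```python
def four_stage_rk_march(residual, u, local_dt, n_iter,
                        alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """Jameson-style four-stage Runge-Kutta pseudo-time march toward steady
    state: each stage sets u = u0 - alpha_k * dt_i * R_i(u), with a per-cell
    (local) time step dt_i. A minimal sketch of the acceleration idea."""
    for _ in range(n_iter):
        u0 = list(u)
        for a in alphas:
            r = residual(u)
            u = [u0_i - a * dt_i * r_i
                 for u0_i, dt_i, r_i in zip(u0, local_dt, r)]
    return u

# model residual R_i(u) = lam_i * (u_i - g_i); the steady state is u_i = g_i
lam = [1.0, 10.0, 100.0]                 # widely varying local "wave speeds"
g = [2.0, -1.0, 0.5]
residual = lambda u: [l * (ui - gi) for l, ui, gi in zip(lam, u, g)]
cfl = 1.0
local_dt = [cfl / l for l in lam]        # local time step: the same CFL in every cell
u = four_stage_rk_march(residual, [0.0, 0.0, 0.0], local_dt, n_iter=30)
```

    With a single global time step the stiffest cell would dictate dt for all cells; the local step lets every cell march at its own stability limit, which is exactly why local time-stepping accelerates convergence to steady state.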

  19. Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Liou, M.-S.; Povinelli, L. A.

    1993-01-01

    A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping, and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown include a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.

  20. Frequency-dependent hydrodynamic interaction between two solid spheres

    NASA Astrophysics Data System (ADS)

    Jung, Gerhard; Schmid, Friederike

    2017-12-01

    Hydrodynamic interactions play an important role in many areas of soft matter science. In simulations with implicit solvent, various techniques such as Brownian or Stokesian dynamics explicitly include hydrodynamic interactions a posteriori by using hydrodynamic diffusion tensors derived from the Stokes equation. However, this equation assumes the interaction to be instantaneous, an idealized approximation that is valid only on long time scales. In the present paper, we go one step further and analyze the time-dependence of hydrodynamic interactions between finite-sized particles in a compressible fluid on the basis of the linearized Navier-Stokes equation. The theoretical results show that at high frequencies, the compressibility of the fluid has a significant impact on the frequency-dependent pair interactions. The predictions of hydrodynamic theory are compared to molecular dynamics simulations of two nanocolloids in a Lennard-Jones fluid. For this system, we reconstruct memory functions by extending the inverse Volterra technique. The simulation data agree very well with the theory; the theory can therefore be used to implement dynamically consistent hydrodynamic interactions in the increasingly popular field of non-Markovian modeling.
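
    The memory-function reconstruction can be illustrated by stepwise inversion of a discretized Volterra equation dC/dt = -∫₀ᵗ K(s) C(t-s) ds (a minimal sketch of the basic inverse Volterra technique, not the authors' extension of it; the discretization and kernel below are hypothetical):

```python
import math

def generate_correlation(K, h, n_steps):
    """Forward-integrate the memory equation dC/dt = -∫0^t K(s) C(t-s) ds
    with forward Euler and a rectangle rule, starting from C(0) = 1."""
    C = [1.0]
    for n in range(n_steps):
        conv = sum(K[m] * C[n - m] for m in range(n + 1))   # rectangle rule
        C.append(C[n] - h * h * conv)
    return C

def invert_volterra(C, h):
    """Reconstruct the memory kernel K from a correlation function C by
    stepwise inversion of the same discretized Volterra equation: at step n
    only K[n] is unknown, so it can be solved for directly."""
    n_steps = len(C) - 1
    K = []
    for n in range(n_steps):
        S = (C[n] - C[n + 1]) / (h * h)                     # full convolution sum
        partial = sum(K[m] * C[n - m] for m in range(n))    # known contributions
        K.append((S - partial) / C[0])
    return K

# round trip: build C from a known exponential kernel, then recover the kernel
h = 0.05
K_true = [math.exp(-0.5 * m * h) for m in range(200)]
C = generate_correlation(K_true, h, 200)
K_rec = invert_volterra(C, h)
```

    Generating C from a known kernel and inverting it back is a standard consistency check on the discretization before applying the inversion to simulation data.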
